<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Kubernetes Contributors – Contributor Blog</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/</link><description>Recent content in Contributor Blog on Kubernetes Contributors</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Thu, 12 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>Blog: Spotlight on SIG Architecture: API Governance</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2026/02/12/sig-architecture-api/</link><pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2026/02/12/sig-architecture-api/</guid><description>
&lt;p>&lt;em>This is the fifth interview in a SIG Architecture Spotlight series covering the SIG&amp;rsquo;s different
subprojects. In this installment, we cover &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#architecture-and-api-governance-1">SIG Architecture: API
Governance&lt;/a>.&lt;/em>&lt;/p>
&lt;p>In this SIG Architecture spotlight we talked with &lt;a href="https://github.com/liggitt">Jordan Liggitt&lt;/a>, lead
of the API Governance sub-project.&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>&lt;strong>FM: Hello Jordan, thank you for your availability. Tell us a bit about yourself, your role and how
you got involved in Kubernetes.&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: My name is Jordan Liggitt. I&amp;rsquo;m a Christian, husband, father of four, software engineer at
&lt;a href="https://about.google/">Google&lt;/a> by day, and &lt;a href="https://www.youtube.com/watch?v=UDdr-VIWQwo">amateur musician&lt;/a> by stealth. I was born in Texas (and still
like to claim it as my point of origin), but I&amp;rsquo;ve lived in North Carolina for most of my life.&lt;/p>
&lt;p>I&amp;rsquo;ve been working on Kubernetes since 2014. At that time, I was working on authentication and
authorization at Red Hat, and my very first pull request to Kubernetes attempted to &lt;a href="https://github.com/kubernetes/kubernetes/pull/2328">add an OAuth
server&lt;/a> to the Kubernetes API server. It never
exited work-in-progress status. I ended up going with a different approach that layered on top of
the core Kubernetes API server in a different project (spoiler alert: this is foreshadowing), and I
closed it without merging six months later.&lt;/p>
&lt;p>Undeterred by that start, I stayed involved, helped build Kubernetes authentication and
authorization capabilities, and got involved in the definition and evolution of the core Kubernetes
APIs from early beta APIs, like &lt;code>v1beta3&lt;/code>, to &lt;code>v1&lt;/code>. I got tagged as an API reviewer in 2016 based on
those contributions, and was added as an API approver in 2017.&lt;/p>
&lt;p>Today, I help lead the API Governance and code organization subprojects for SIG Architecture, and I
am a tech lead for SIG Auth.&lt;/p>
&lt;p>&lt;strong>FM: And when did you get specifically involved in the API Governance project?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: Around 2019.&lt;/p>
&lt;h2 id="goals-and-scope-of-api-governance">Goals and scope of API Governance&lt;/h2>
&lt;p>&lt;strong>FM: How would you describe the main goals and areas of intervention of the subproject?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: The surface area includes all the various APIs Kubernetes has, and there are APIs that people do not
always realize are APIs: command-line flags, configuration files, how binaries are run, how they
talk to back-end components like the container runtime, and how they persist data. People often
think of &amp;ldquo;the API&amp;rdquo; as only the &lt;a href="https://kubernetes.io/docs/reference/using-api/">REST API&lt;/a>&amp;hellip; that
is the biggest and most obvious one, and the one with the largest audience, but all of these other
surfaces are also APIs. Their audiences are narrower, so there is more flexibility there, but they
still require consideration.&lt;/p>
&lt;p>The goal is to be stable while still enabling innovation. Stability is easy if you never change
anything, but that contradicts the goal of evolution and growth. So we balance &amp;ldquo;be stable&amp;rdquo; with
&amp;ldquo;allow change&amp;rdquo;.&lt;/p>
&lt;p>&lt;strong>FM: Speaking of changes, in terms of ensuring consistency and quality (which is clearly one of the
reasons this project exists), what are the specific quality gates in the lifecycle of a Kubernetes
change? Does API Governance get involved during the release cycle, prior to it through guidelines,
or somewhere in between? At what points do you ensure the intended role is fulfilled?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: We have &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md">guidelines and
conventions&lt;/a>,
both for APIs in general and for how to change an API. These are living documents that we update as
we encounter new scenarios. They are long and dense, so we also support them with involvement at
either the design stage or the implementation stage.&lt;/p>
&lt;p>Sometimes, due to bandwidth constraints, teams move ahead with design work without feedback from &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md">API Review&lt;/a>. That’s fine, but it means the API review happens once implementation begins, and there may be
substantial feedback at that point. So we get involved when a new API is created or an existing
API is changed, either at design or implementation.&lt;/p>
&lt;p>&lt;strong>FM: Is this during the Kubernetes Enhancement Proposal (KEP) process? Since KEPs are mandatory for
enhancements, I assume part of the work intersects with API Governance?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: It can. &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/README.md">KEPs&lt;/a> vary
in how detailed they are. Some include literal API definitions. When they do, we can perform an API
review at the design stage. Then implementation becomes a matter of checking fidelity to the design.&lt;/p>
&lt;p>Getting involved early is ideal. But some KEPs are conceptual and leave details to the
implementation. That’s not wrong; it just means the implementation will be more exploratory. Then
API Review gets involved later, possibly recommending structural changes.&lt;/p>
&lt;p>There’s a trade-off regardless: detailed design upfront versus iterative discovery during
implementation. People and teams work differently, and we’re flexible and happy to consult early or
at implementation time.&lt;/p>
&lt;p>&lt;strong>FM: This reminds me of what Fred Brooks wrote in &amp;ldquo;The Mythical Man-Month&amp;rdquo; about conceptual
integrity being central to product quality&amp;hellip; No matter how you structure the process, there must be
a point where someone looks at what is coming and ensures conceptual integrity. Kubernetes uses APIs
everywhere &amp;ndash; externally and internally &amp;ndash; so API Governance is critical to maintaining that
integrity. How is this captured?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: Yes, the conventions document captures patterns we’ve learned over time: what to do in
various situations. We also have automated linters and checks to ensure correctness around patterns
like spec/status semantics. These automated tools help catch issues even when humans miss them.&lt;/p>
&lt;p>As new scenarios arise &amp;ndash; and they do constantly &amp;ndash; we think through how to approach them and fold
the results back into our documentation and tools. Sometimes it takes a few attempts before we
settle on an approach that works well.&lt;/p>
&lt;p>&lt;strong>FM: Exactly. Each new interaction improves the guidelines.&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: Right. And sometimes the first approach turns out to be wrong. It may take two or three
iterations before we land on something robust.&lt;/p>
&lt;h2 id="the-impact-of-custom-resource-definitions">The impact of Custom Resource Definitions&lt;/h2>
&lt;p>&lt;strong>FM: Is there any particular change, episode, or domain that stands out as especially noteworthy,
complex, or interesting in your experience?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: The watershed moment was &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">Custom Resources&lt;/a>.
Prior to that, every API was handcrafted by us and fully reviewed. There were inconsistencies, but
we understood and controlled every type and field.&lt;/p>
&lt;p>When Custom Resources arrived, anyone could define anything. The first version did not even require
a schema. That made it extremely powerful &amp;ndash; it enabled change immediately &amp;ndash; but it left us playing
catch-up on stability and consistency.&lt;/p>
&lt;p>When Custom Resources graduated to General Availability (GA), schemas became required, but escape
hatches still existed for backward compatibility. Since then, we’ve been working on giving CRD
authors validation capabilities comparable to built-ins. Built-in validation rules for CRDs have
only just reached GA in the last few releases.&lt;/p>
&lt;p>So CRDs opened the &amp;ldquo;anything is possible&amp;rdquo; era. Built-in validation rules are the second major
milestone: bringing consistency back.&lt;/p>
&lt;p>The three major themes have been defining schemas, validating data, and handling pre-existing
invalid data. With ratcheting validation (allowing data to improve without breaking existing
objects), we can now guide CRD authors toward conventions without breaking the world.&lt;/p>
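&lt;p>&lt;em>For illustration (a sketch with hypothetical field names): CRD authors can embed CEL validation rules directly in an OpenAPI schema via &lt;code>x-kubernetes-validations&lt;/code>:&lt;/em>&lt;/p>
&lt;pre>&lt;code>openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      required: ["replicas", "minReplicas"]
      # CEL rule evaluated against the spec object ("self")
      x-kubernetes-validations:
        - rule: "self.minReplicas &lt;= self.replicas"
          message: "replicas must be at least minReplicas"
      properties:
        replicas:
          type: integer
        minReplicas:
          type: integer
&lt;/code>&lt;/pre>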
&lt;h2 id="api-governance-in-context">API Governance in context&lt;/h2>
&lt;p>&lt;strong>FM: How does API Governance relate to SIG Architecture and API Machinery?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: &lt;a href="https://github.com/kubernetes/apimachinery">API Machinery&lt;/a> provides the actual code and
tools that people build APIs on. They don’t review APIs for storage, networking, scheduling, etc.&lt;/p>
&lt;p>SIG Architecture sets the overall system direction and works with API Machinery to ensure the system
supports that direction. API Governance works with other SIGs building on that foundation to define
conventions and patterns, ensuring consistent use of what API Machinery provides.&lt;/p>
&lt;p>&lt;strong>FM: Thank you. That clarifies the flow. Going back to &lt;a href="https://kubernetes.io/releases/release/">release cycles&lt;/a>: do release phases &amp;ndash; enhancements freeze, code
freeze &amp;ndash; change your workload? Or is API Governance mostly continuous?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: We get involved in two places: design and implementation. Design involvement increases
before enhancements freeze; implementation involvement increases before code freeze. However, many
efforts span multiple releases, so there is always some design and implementation happening, even
for work targeting future releases. Between those intense periods, we often have time for
long-term design work.&lt;/p>
&lt;p>An anti-pattern we see is teams thinking about a large feature for months and then presenting it
three weeks before enhancements freeze, saying, &amp;ldquo;Here is the design, please review.&amp;rdquo; For big changes
with API impact, it’s much better to involve API Governance early.&lt;/p>
&lt;p>And there are good times in the cycle for this &amp;ndash; between freezes &amp;ndash; when people have bandwidth.
That’s when long-term review work fits best.&lt;/p>
&lt;h2 id="getting-involved">Getting involved&lt;/h2>
&lt;p>&lt;strong>FM: Clearly. Now, regarding team dynamics and new contributors: how can someone get involved in
API Governance? What should they focus on?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: It’s usually best to follow a specific change rather than trying to learn everything at
once. Pick a small API change, perhaps one someone else is making or one you want to make, and
observe the full process: design, implementation, review.&lt;/p>
&lt;p>High-bandwidth review &amp;ndash; live discussion over video &amp;ndash; is often very effective. If you’re making or
following a change, ask whether there’s a time to go over the design or PR together. Observing those
discussions is extremely instructive.&lt;/p>
&lt;p>Start with a small change. Then move to a bigger one. Then maybe a new API. That builds
understanding of conventions as they are applied in practice.&lt;/p>
&lt;p>&lt;strong>FM: Excellent. Any final comments, or anything we missed?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: Yes&amp;hellip; the reason we care so much about compatibility and stability is for our users. It’s
easy for contributors to see those requirements as painful obstacles preventing cleanup or requiring
tedious work&amp;hellip; but users integrated with our system, and we made a promise to them: we want them to
trust that we won’t break that contract. So even when it requires more work, moves slower, or
involves duplication, we choose stability.&lt;/p>
&lt;p>We are not trying to be obstructive; we are trying to make life good for users.&lt;/p>
&lt;p>A lot of our questions focus on the future: you want to do something now&amp;hellip; how will you evolve it
later without breaking it? We assume we will know more in the future, and we want the design to
leave room for that.&lt;/p>
&lt;p>We also assume we will make mistakes. The question then is: how do we leave ourselves avenues to
improve while keeping compatibility promises?&lt;/p>
&lt;p>&lt;strong>FM: Exactly. Jordan, thank you, I think we’ve covered everything. This has been an insightful view
into the API Governance project and its role in the wider Kubernetes project.&lt;/strong>&lt;/p>
&lt;p>&lt;strong>JL&lt;/strong>: Thank you.&lt;/p></description></item><item><title>Blog: Announcing the Checkpoint/Restore Working Group</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2026/01/21/introducing-checkpoint-restore-wg/</link><pubDate>Wed, 21 Jan 2026 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2026/01/21/introducing-checkpoint-restore-wg/</guid><description>
&lt;p>The community around Kubernetes includes a number of Special Interest Groups (SIGs) and Working Groups (WGs) facilitating discussions on important topics between interested contributors. Today we would like to announce the new &lt;a href="https://github.com/kubernetes/community/tree/master/wg-checkpoint-restore">Kubernetes Checkpoint Restore WG&lt;/a> focusing on the integration of Checkpoint/Restore functionality into Kubernetes.&lt;/p>
&lt;h2 id="motivation-and-use-cases">Motivation and use cases&lt;/h2>
&lt;p>There are several high-level scenarios discussed in the working group:&lt;/p>
&lt;ul>
&lt;li>Optimizing resource utilization for interactive workloads, such as Jupyter notebooks and AI chatbots&lt;/li>
&lt;li>Accelerating startup of applications with long initialization times, including Java applications and &lt;a href="https://doi.org/10.1145/3731599.3767354">LLM inference services&lt;/a>&lt;/li>
&lt;li>Using periodic checkpointing to enable fault-tolerance for long-running workloads, such as distributed model training&lt;/li>
&lt;li>Providing &lt;a href="https://doi.org/10.1007/978-3-032-10507-3_3">interruption-aware scheduling&lt;/a> with transparent checkpoint/restore, allowing lower-priority Pods to be preempted while preserving the runtime state of applications&lt;/li>
&lt;li>Facilitating Pod migration across nodes for load balancing and maintenance, without disrupting workloads&lt;/li>
&lt;li>Enabling forensic checkpointing to investigate and analyze security incidents such as cyberattacks, data breaches, and unauthorized access&lt;/li>
&lt;/ul>
&lt;p>Across these scenarios, the goal is to help facilitate discussions of ideas between the Kubernetes community and the growing Checkpoint/Restore in Userspace (CRIU) ecosystem. The CRIU community includes several projects that support these use cases, including:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/checkpoint-restore/criu">CRIU&lt;/a> - A tool for checkpointing and restoring running applications and containers&lt;/li>
&lt;li>&lt;a href="https://github.com/checkpoint-restore/checkpointctl">checkpointctl&lt;/a> - A tool for in-depth analysis of container checkpoints&lt;/li>
&lt;li>&lt;a href="https://github.com/checkpoint-restore/criu-coordinator">criu-coordinator&lt;/a> - A tool for coordinated checkpoint/restore of distributed applications with CRIU&lt;/li>
&lt;li>&lt;a href="https://github.com/checkpoint-restore/checkpoint-restore-operator">checkpoint-restore-operator&lt;/a> - A Kubernetes operator for managing checkpoints&lt;/li>
&lt;/ul>
&lt;p>More information about the checkpoint/restore integration with Kubernetes is also available &lt;a href="https://criu.org/Kubernetes">here&lt;/a>.&lt;/p>
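&lt;p>&lt;em>As a rough sketch of how checkpointing surfaces in Kubernetes today (assuming the &lt;code>ContainerCheckpoint&lt;/code> feature gate is enabled and you have kubelet client credentials; the node, pod, container, and archive names below are placeholders):&lt;/em>&lt;/p>
&lt;pre>&lt;code># Ask the kubelet on the pod's node to checkpoint a running container
curl -X POST "https://&lt;node-address>:10250/checkpoint/default/my-pod/my-container" \
  --cacert ca.crt --cert client.crt --key client.key

# The checkpoint archive is written under /var/lib/kubelet/checkpoints/ on that
# node and can be inspected with checkpointctl
checkpointctl show /var/lib/kubelet/checkpoints/&lt;archive>.tar
&lt;/code>&lt;/pre>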
&lt;h2 id="related-events">Related events&lt;/h2>
&lt;p>Following our presentation about &lt;a href="https://sched.co/1tx7i">transparent checkpointing&lt;/a> at KubeCon EU 2025, we are excited to welcome you to our &lt;a href="https://sched.co/2CW6P">panel discussion&lt;/a> and &lt;a href="https://sched.co/2CW7Z">AI + ML session&lt;/a> at KubeCon + CloudNativeCon Europe 2026.&lt;/p>
&lt;h2 id="connect-with-us">Connect with us&lt;/h2>
&lt;p>If you are interested in contributing to Kubernetes or CRIU, there are several ways to participate:&lt;/p>
&lt;ul>
&lt;li>Join our meeting every second Thursday at 17:00 UTC via the Zoom link in our &lt;a href="https://docs.google.com/document/d/1ZMtHBibXfTw4cQerM4O4DJonzVs3W7Hp2K5ml6pTufs/edit">meeting notes&lt;/a>; recordings of our prior meetings are available &lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP1P7F40IMVL3NsNiIm5AGos">here&lt;/a>.&lt;/li>
&lt;li>Chat with us on the &lt;a href="http://slack.k8s.io/">Kubernetes Slack&lt;/a>: &lt;a href="https://kubernetes.slack.com/messages/wg-checkpoint-restore">#wg-checkpoint-restore&lt;/a>&lt;/li>
&lt;li>Email us at the &lt;a href="https://groups.google.com/a/kubernetes.io/g/wg-checkpoint-restore">wg-checkpoint-restore mailing list&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Blog: Ingress NGINX Retirement: What You Need to Know</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/11/12/ingress-nginx-retirement/</link><pubDate>Wed, 12 Nov 2025 12:00:00 -0500</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/11/12/ingress-nginx-retirement/</guid><description>
&lt;p>To prioritize the safety and security of the ecosystem, Kubernetes SIG Network and the Security Response Committee are announcing the upcoming retirement of &lt;a href="https://github.com/kubernetes/ingress-nginx/">Ingress NGINX&lt;/a>. Best-effort maintenance will continue until March 2026. Afterward, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. &lt;strong>Existing deployments of Ingress NGINX will continue to function and installation artifacts will remain available.&lt;/strong>&lt;/p>
&lt;p>We recommend migrating to one of the many alternatives. Consider &lt;a href="https://gateway-api.sigs.k8s.io/guides/">migrating to Gateway API&lt;/a>, the modern replacement for Ingress. If you must continue using Ingress, many alternative Ingress controllers are &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">listed in the Kubernetes documentation&lt;/a>. Continue reading for further information about the history and current state of Ingress NGINX, as well as next steps.&lt;/p>
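&lt;p>&lt;em>Before planning a migration, it can help to inventory which Ingress resources and controllers your cluster actually uses &amp;ndash; a sketch, assuming cluster-wide read permissions:&lt;/em>&lt;/p>
&lt;pre>&lt;code># List every Ingress in the cluster and the IngressClass each one uses
kubectl get ingress --all-namespaces \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName

# List the IngressClasses (and thus the controllers) installed in the cluster
kubectl get ingressclass
&lt;/code>&lt;/pre>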
&lt;h2 id="about-ingress-nginx">About Ingress NGINX&lt;/h2>
&lt;p>&lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress&lt;/a> is the original user-friendly way to direct network traffic to workloads running on Kubernetes. (&lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/">Gateway API&lt;/a> is a newer way to achieve many of the same goals.) In order for an Ingress to work in your cluster, there must be an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">Ingress controller&lt;/a> running. There are many Ingress controller choices available, which serve the needs of different users and use cases. Some are cloud-provider specific, while others have more general applicability.&lt;/p>
&lt;p>&lt;a href="https://www.github.com/kubernetes/ingress-nginx">Ingress NGINX&lt;/a> was an Ingress controller, developed early in the history of the Kubernetes project as an example implementation of the API. It became very popular due to its tremendous flexibility, breadth of features, and independence from any particular cloud or infrastructure provider. Since those days, many other Ingress controllers have been created within the Kubernetes project by community groups, and by cloud native vendors. Ingress NGINX has continued to be one of the most popular, deployed as part of many hosted Kubernetes platforms and within innumerable independent users’ clusters.&lt;/p>
&lt;h2 id="history-and-challenges">History and Challenges&lt;/h2>
&lt;p>The breadth and flexibility of Ingress NGINX have caused maintenance challenges. Changing expectations about cloud native software have also added complications. What were once considered helpful options have sometimes come to be considered serious security flaws, such as the ability to add arbitrary NGINX configuration directives via the &amp;ldquo;snippets&amp;rdquo; annotations. Yesterday’s flexibility has become today’s insurmountable technical debt.&lt;/p>
&lt;p>Despite the project’s popularity among users, Ingress NGINX has always struggled with insufficient or barely-sufficient maintainership. For years, the project has had only one or two people doing development work, on their own time, after work hours and on weekends. Last year, the Ingress NGINX maintainers &lt;a href="https://kccncna2024.sched.com/event/1hoxW/securing-the-future-of-ingress-nginx-james-strong-isovalent-marco-ebert-giant-swarm">announced&lt;/a> their plans to wind down Ingress NGINX and develop a replacement controller together with the Gateway API community. Unfortunately, even that announcement failed to generate additional interest in helping maintain Ingress NGINX or develop InGate to replace it. (InGate development never progressed far enough to create a mature replacement; it will also be retired.)&lt;/p>
&lt;h2 id="current-state-and-next-steps">Current State and Next Steps&lt;/h2>
&lt;p>Currently, Ingress NGINX is receiving best-effort maintenance. SIG Network and the Security Response Committee have exhausted our efforts to find additional support to make Ingress NGINX sustainable. To prioritize user safety, we must retire the project.&lt;/p>
&lt;p>In March 2026, Ingress NGINX maintenance will be halted, and the project will be &lt;a href="https://github.com/kubernetes-retired/">retired&lt;/a>. After that time, there will be no further releases, no bugfixes, and no updates to resolve any security vulnerabilities that may be discovered. The GitHub repositories will be made read-only and left available for reference.&lt;/p>
&lt;p>Existing deployments of Ingress NGINX will not be broken. Existing project artifacts such as Helm charts and container images will remain available.&lt;/p>
&lt;p>In most cases, you can check whether you use Ingress NGINX by running &lt;code>kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx&lt;/code> with cluster administrator permissions.&lt;/p>
&lt;p>We would like to thank the Ingress NGINX maintainers for their work in creating and maintaining this project &amp;ndash; their dedication remains impressive. This Ingress controller has powered billions of requests in datacenters and homelabs all around the world. In a lot of ways, Kubernetes wouldn’t be where it is without Ingress NGINX, and we are grateful for so many years of incredible effort.&lt;/p>
&lt;p>&lt;strong>SIG Network and the Security Response Committee recommend that all Ingress NGINX users begin migration to Gateway API or another Ingress controller immediately.&lt;/strong> Many options are listed in the Kubernetes documentation: &lt;a href="https://gateway-api.sigs.k8s.io/guides/">Gateway API&lt;/a>, &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">Ingress&lt;/a>. Additional options may be available from vendors you work with.&lt;/p></description></item><item><title>Blog: Announcing the 2025 Steering Committee Election Results</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/11/09/steering-committee-results-2025/</link><pubDate>Sun, 09 Nov 2025 15:10:00 -0500</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/11/09/steering-committee-results-2025/</guid><description>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2025">2025 Steering Committee Election&lt;/a> is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2025. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p>
&lt;p>The Steering Committee oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md">charter&lt;/a>.&lt;/p>
&lt;p>Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.&lt;/p>
&lt;h2 id="results">Results&lt;/h2>
&lt;p>Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Kat Cosgrove (&lt;a href="https://github.com/katcosgrove">@katcosgrove&lt;/a>), Minimus&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Paco Xu (&lt;a href="https://github.com/pacoxu">@pacoxu&lt;/a>), DaoCloud&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Rita Zhang (&lt;a href="https://github.com/ritazh">@ritazh&lt;/a>), Microsoft&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Maciej Szulik (&lt;a href="https://github.com/soltysh">@soltysh&lt;/a>), Defense Unicorns&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>They join continuing members:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Antonio Ojea (&lt;a href="https://github.com/aojea">@aojea&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Benjamin Elder (&lt;a href="https://github.com/BenTheElder">@BenTheElder&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Sascha Grunert (&lt;a href="https://github.com/saschagrunert">@saschagrunert&lt;/a>), Red Hat&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Maciej Szulik and Paco Xu are returning Steering Committee Members.&lt;/p>
&lt;h2 id="big-thanks">Big thanks!&lt;/h2>
&lt;p>Thank you, and congratulations on a successful election, to this round’s election officers:&lt;/p>
&lt;ul>
&lt;li>Christoph Blecker (&lt;a href="https://github.com/cblecker">@cblecker&lt;/a>)&lt;/li>
&lt;li>Nina Polshakova (&lt;a href="https://github.com/npolshakova">@npolshakova&lt;/a>)&lt;/li>
&lt;li>Sreeram Venkitesh (&lt;a href="https://github.com/sreeram-venkitesh">@sreeram-venkitesh&lt;/a>)&lt;/li>
&lt;/ul>
&lt;p>Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:&lt;/p>
&lt;ul>
&lt;li>Stephen Augustus (&lt;a href="https://github.com/justaugustus">@justaugustus&lt;/a>), Bloomberg&lt;/li>
&lt;li>Patrick Ohly (&lt;a href="https://github.com/pohly">@pohly&lt;/a>), Intel&lt;/li>
&lt;/ul>
&lt;p>And thank you to all the candidates who came forward to run for election.&lt;/p>
&lt;h2 id="get-involved-with-the-steering-committee">Get involved with the Steering Committee&lt;/h2>
&lt;p>This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee &lt;a href="https://bit.ly/k8s-steering-wd">meeting notes&lt;/a> and weigh in by filing an issue or creating a PR against their &lt;a href="https://github.com/kubernetes/steering">repo&lt;/a>. They have an open meeting on &lt;a href="https://github.com/kubernetes/steering">the first Wednesday of every month at 8am PT&lt;/a>. They can also be contacted at their public mailing list &lt;a href="mailto:steering@kubernetes.io">steering@kubernetes.io&lt;/a>.&lt;/p>
&lt;p>You can see what the Steering Committee meetings are all about by watching past meetings on the &lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM">YouTube Playlist&lt;/a>.&lt;/p>
&lt;hr>
&lt;p>&lt;em>This post was adapted from one written by the &lt;a href="https://github.com/kubernetes/community/tree/master/communication/contributor-comms">Contributor Comms Subproject&lt;/a>. If you want to write stories about the Kubernetes community, learn more about us.&lt;/em>&lt;/p>
&lt;p>&lt;em>This article was revised in November 2025 to update the information about when the steering committee meets.&lt;/em>&lt;/p></description></item><item><title>Blog: Spotlight on Policy Working Group</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/10/18/wg-policy-spotlight-2025/</link><pubDate>Sat, 18 Oct 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/10/18/wg-policy-spotlight-2025/</guid><description>
&lt;p>&lt;em>(Note: The Policy Working Group has completed its mission and is no longer active. This article reflects its work, accomplishments, and insights into how a working group operates.)&lt;/em>&lt;/p>
&lt;p>In the complex world of Kubernetes, policies play a crucial role in managing and securing clusters. But have you ever wondered how these policies are developed, implemented, and standardized across the Kubernetes ecosystem? To answer that, let&amp;rsquo;s take a look back at the work of the Policy Working Group.&lt;/p>
&lt;p>The Policy Working Group was dedicated to a critical mission: providing an overall architecture that encompasses both current policy-related implementations and future policy proposals in Kubernetes. Their goal was both ambitious and essential: to develop a universal policy architecture that benefits developers and end-users alike.&lt;/p>
&lt;p>Through collaborative methods, this working group strove to bring clarity and consistency to the often complex world of Kubernetes policies. By focusing on both existing implementations and future proposals, they ensured that the policy landscape in Kubernetes remains coherent and accessible as the technology evolves.&lt;/p>
&lt;p>This blog post dives deeper into the work of the Policy Working Group, guided by insights from its former co-chairs:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://twitter.com/JimBugwadia">Jim Bugwadia&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://twitter.com/poonam_lamba">Poonam Lamba&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://twitter.com/sudermanjr">Andy Suderman&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>&lt;em>Interviewed by &lt;a href="https://twitter.com/arujjval">Arujjwal Negi&lt;/a>.&lt;/em>&lt;/p>
&lt;p>These co-chairs explained what the Policy Working Group was all about.&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>&lt;strong>Hello, thank you for the time! Let’s start with some introductions, could you tell us a bit about yourself, your role, and how you got involved in Kubernetes?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Jim Bugwadia&lt;/strong>: My name is Jim Bugwadia, and I am a co-founder and the CEO at Nirmata, which provides solutions that automate security and compliance for cloud-native workloads. At Nirmata, we have been working with Kubernetes since it started in 2014. We initially built a Kubernetes policy engine in our commercial platform and later donated it to CNCF as the Kyverno project. I joined the CNCF Kubernetes Policy Working Group to help build and standardize various aspects of policy management for Kubernetes and later became a co-chair.&lt;/p>
&lt;p>&lt;strong>Andy Suderman&lt;/strong>: My name is Andy Suderman and I am the CTO of Fairwinds, a managed Kubernetes-as-a-Service provider. I began working with Kubernetes in 2016 building a web conferencing platform. I am an author and/or maintainer of several Kubernetes-related open-source projects such as Goldilocks, Pluto, and Polaris. Polaris is a JSON-schema-based policy engine, which started Fairwinds&amp;rsquo; journey into the policy space and my involvement in the Policy Working Group.&lt;/p>
&lt;p>&lt;strong>Poonam Lamba&lt;/strong>: My name is Poonam Lamba, and I currently work as a Product Manager for Google Kubernetes Engine (GKE) at Google. My journey with Kubernetes began back in 2017 when I was building an SRE platform for a large enterprise, using a private cloud built on Kubernetes. Intrigued by its potential to revolutionize the way we deployed and managed applications at the time, I dove headfirst into learning everything I could about it. Since then, I&amp;rsquo;ve had the opportunity to build the policy and compliance products for GKE, and I lead and contribute to the GKE CIS benchmarks. I am involved with the Gatekeeper project, and I have contributed to the Policy WG for over two years, serving as a co-chair for the group.&lt;/p>
&lt;p>&lt;em>Responses to the following questions represent an amalgamation of insights from the former co-chairs.&lt;/em>&lt;/p>
&lt;h2 id="about-working-groups">About Working Groups&lt;/h2>
&lt;p>&lt;strong>One thing even I am not aware of is the difference between a working group and a SIG. Can you help us understand what a working group is and how it is different from a SIG?&lt;/strong>&lt;/p>
&lt;p>Unlike SIGs, working groups are temporary and focused on tackling specific, cross-cutting issues or projects that may involve multiple SIGs. Their lifespan is defined, and they disband once they&amp;rsquo;ve achieved their objective. Generally, working groups don&amp;rsquo;t own code or have long-term responsibility for managing a particular area of the Kubernetes project.&lt;/p>
&lt;p>(To know more about SIGs, visit the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-list.md">list of Special Interest Groups&lt;/a>)&lt;/p>
&lt;p>&lt;strong>You mentioned that Working Groups involve multiple SIGS. What SIGS was the Policy WG closely involved with, and how did you coordinate with them?&lt;/strong>&lt;/p>
&lt;p>The group collaborated closely with Kubernetes SIG Auth throughout its existence and, more recently, also worked with SIG Security since its formation. This collaboration took a few forms. We provided periodic updates during the SIG meetings to keep them informed of our progress and activities, and we used other community forums to maintain open lines of communication and ensure our work aligned with the broader Kubernetes ecosystem. This collaborative approach helped the group stay coordinated with related efforts across the Kubernetes community.&lt;/p>
&lt;h2 id="policy-wg">Policy WG&lt;/h2>
&lt;p>&lt;strong>Why was the Policy Working Group created?&lt;/strong>&lt;/p>
&lt;p>Kubernetes is powered by a highly declarative, fine-grained, and extensible configuration management system that enables a broad set of use cases. We&amp;rsquo;ve observed that a Kubernetes configuration manifest may have different portions that are important to various stakeholders. For example, some parts may be crucial for developers, while others might be of particular interest to security teams or address operational concerns. Given this complexity, we believe that policies governing the usage of these intricate configurations are essential for success with Kubernetes.&lt;/p>
&lt;p>Our Policy Working Group was created specifically to research the standardization of policy definitions and related artifacts. We saw a need to bring consistency and clarity to how policies are defined and implemented across the Kubernetes ecosystem, given the diverse requirements and stakeholders involved in Kubernetes deployments.&lt;/p>
&lt;p>&lt;strong>Can you give me an idea of the work you did in the group?&lt;/strong>&lt;/p>
&lt;p>We worked on several Kubernetes policy-related projects. Our initiatives included:&lt;/p>
&lt;ul>
&lt;li>We worked on a Kubernetes Enhancement Proposal (KEP) for the Kubernetes Policy Reports API. This aims to standardize how policy reports are generated and consumed within the Kubernetes ecosystem.&lt;/li>
&lt;li>We conducted a CNCF survey to better understand policy usage in the Kubernetes space. This helped gauge the practices and needs across the community at the time.&lt;/li>
&lt;li>We wrote a paper that guides users in achieving PCI-DSS compliance for containers. It is intended to help organizations meet important security standards in their Kubernetes environments.&lt;/li>
&lt;li>We also worked on a paper highlighting how shifting security down can benefit organizations. This focuses on the advantages of implementing security measures earlier in the development and deployment process.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Can you tell us what were the main objectives of the Policy Working Group and some of your key accomplishments?&lt;/strong>&lt;/p>
&lt;p>The charter of the Policy WG was to help standardize policy management for Kubernetes and educate the community on best practices.&lt;/p>
&lt;p>To accomplish this we updated the Kubernetes documentation (&lt;a href="https://kubernetes.io/docs/concepts/policy">Policies | Kubernetes&lt;/a>), produced several whitepapers (&lt;a href="https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy/CNCF_Kubernetes_Policy_Management_WhitePaper_v1.pdf">Kubernetes Policy Management&lt;/a>, &lt;a href="https://github.com/kubernetes/sig-security/blob/main/sig-security-docs/papers/policy_grc/Kubernetes_Policy_WG_Paper_v1_101123.pdf">Kubernetes GRC&lt;/a>), and created the Policy Reports API (&lt;a href="https://github.com/kubernetes-retired/wg-policy-prototypes/blob/master/policy-report/docs/api-docs.md">API reference&lt;/a>), which standardizes reporting across various tools. Several popular tools such as Falco, Trivy, Kyverno, kube-bench, and others support the Policy Reports API. A major milestone for the Policy WG was promoting the Policy Reports API to a SIG-level API or finding it a stable home.&lt;/p>
&lt;p>Beyond that, as &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/">ValidatingAdmissionPolicy&lt;/a> and &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/mutating-admission-policy/">MutatingAdmissionPolicy&lt;/a> approached GA in Kubernetes, a key goal of the WG was to guide and educate the community on the tradeoffs and appropriate usage patterns for these built-in API objects and other CNCF policy management solutions like OPA/Gatekeeper and Kyverno.&lt;/p>
&lt;h2 id="challenges">Challenges&lt;/h2>
&lt;p>&lt;strong>What were some of the major challenges that the Policy Working Group worked on?&lt;/strong>&lt;/p>
&lt;p>During our work in the Policy Working Group, we encountered several challenges:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>One of the main issues we faced was finding time to consistently contribute. Given that many of us have other professional commitments, it can be difficult to dedicate regular time to the working group&amp;rsquo;s initiatives.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Another challenge we experienced was related to our consensus-driven model. While this approach ensures that all voices are heard, it can sometimes lead to slower decision-making processes. We valued thorough discussion and agreement, but this can occasionally delay progress on our projects.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>We&amp;rsquo;ve also encountered occasional differences of opinion among group members. These situations require careful navigation to ensure that we maintain a collaborative and productive environment while addressing diverse viewpoints.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Lastly, we&amp;rsquo;ve noticed that newcomers to the group may find it difficult to contribute effectively without consistent attendance at our meetings. The complex nature of our work often requires ongoing context, which can be challenging for those who aren&amp;rsquo;t able to participate regularly.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Can you tell me more about those challenges? How did you discover each one? What has the impact been? What were some strategies you used to address them?&lt;/strong>&lt;/p>
&lt;p>There are no easy answers, but having more contributors and maintainers greatly helps! Overall the CNCF community is great to work with and is very welcoming to beginners. So, if folks out there are hesitating to get involved, I highly encourage them to attend a WG or SIG meeting and just listen in.&lt;/p>
&lt;p>It often takes a few meetings to fully understand the discussions, so don&amp;rsquo;t feel discouraged if you don&amp;rsquo;t grasp everything right away. We made a point to emphasize this and encouraged new members to review documentation as a starting point for getting involved.&lt;/p>
&lt;p>Additionally, differences of opinion were valued and encouraged within the Policy WG. We adhered to the CNCF core values and resolved disagreements by maintaining respect for one another. We also strove to timebox our decisions and assign clear responsibilities to keep things moving forward.&lt;/p>
&lt;hr>
&lt;p>This is where our discussion about the Policy Working Group ends. The working group, and especially the people who took part in this article, hope this gave you some insights into the group&amp;rsquo;s aims and workings. You can get more info about Working Groups &lt;a href="https://github.com/kubernetes/community/blob/master/committee-steering/governance/wg-governance.md">here&lt;/a>.&lt;/p></description></item><item><title>Blog: Spotlight on the Kubernetes Steering Committee</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/09/22/k8s-steering-spotlight-2025/</link><pubDate>Mon, 22 Sep 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/09/22/k8s-steering-spotlight-2025/</guid><description>
&lt;p>&lt;em>This interview was conducted in August 2024, and due to the dynamic nature of the Steering
Committee membership and election process it may not accurately represent the current composition.
The topics covered are, however, highly relevant to understanding its scope of work. As we
approach the Steering Committee elections, it provides useful insights into the workings of the
Committee.&lt;/em>&lt;/p>
&lt;p>The &lt;a href="https://github.com/kubernetes/steering">Kubernetes Steering Committee&lt;/a> is the backbone of the
Kubernetes project, ensuring that its vibrant community and governance structures operate smoothly
and effectively. While the technical brilliance of Kubernetes is often spotlighted through its
&lt;a href="https://github.com/kubernetes/community">Special Interest Groups (SIGs) and Working Groups (WGs)&lt;/a>,
the unsung heroes quietly steering the ship are the members of the Steering Committee. They tackle
complex organizational challenges, empower contributors, and foster the thriving open source
ecosystem that Kubernetes is celebrated for.&lt;/p>
&lt;p>But what does it really take to lead one of the world’s largest open source communities? What are
the hidden challenges, and what drives these individuals to dedicate their time and effort to such
an impactful role? In this exclusive conversation, we sit down with current Steering Committee (SC)
members &amp;mdash; Ben, Nabarun, Paco, Patrick, and Maciej &amp;mdash; to uncover the rewarding, and sometimes
demanding, realities of steering Kubernetes. From their personal journeys and motivations to the
committee’s vital responsibilities and future outlook, this Spotlight offers a rare
behind-the-scenes glimpse into the people who keep Kubernetes on course.&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Sandipan: Can you tell us a little bit about yourself?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: Hi, I’m &lt;a href="https://www.linkedin.com/in/bentheelder/">Benjamin Elder&lt;/a>, also known as
BenTheElder. I started in Kubernetes as a Google Summer of Code student in 2015 and have been
working at Google in the space since 2017. I have contributed a lot to many areas but especially
build, CI, test tooling, etc. My favorite project so far was building
&lt;a href="https://kind.sigs.k8s.io/">KIND&lt;/a>. I have been on the release team, a chair of &lt;a href="https://github.com/kubernetes/community/blob/master/sig-testing/README.md">SIG
Testing&lt;/a>, and currently a
tech lead of SIG Testing and &lt;a href="https://github.com/kubernetes/community/blob/master/sig-k8s-infra/README.md">SIG K8s Infra&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Nabarun&lt;/strong>: Hi, I am &lt;a href="https://www.linkedin.com/in/palnabarun/">Nabarun&lt;/a> from India. I have been
working on Kubernetes since 2019. I have been contributing across multiple areas in Kubernetes: &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md">SIG
ContribEx&lt;/a> (where I am
also a chair), &lt;a href="https://github.com/kubernetes/community/blob/master/sig-api-machinery/README.md">API
Machinery&lt;/a>,
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md">Architecture&lt;/a>, and
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-release/README.md">SIG Release&lt;/a>, where I
contributed to several releases including being the Release Team Lead of &lt;a href="https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/">Kubernetes 1.21&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Paco&lt;/strong>: I am &lt;a href="https://www.linkedin.com/in/pacoxu2020/">Paco&lt;/a> from China. I worked as an open
source team lead in DaoCloud, Shanghai. In the community, I participate mainly in
&lt;a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/">kubeadm&lt;/a>, &lt;a href="https://github.com/kubernetes/community/blob/master/sig-node/README.md">SIG
Node&lt;/a> and &lt;a href="https://github.com/kubernetes/community/blob/master/sig-testing/README.md">SIG
Testing&lt;/a>. Besides, I
helped in KCD China and was co-chair of the recent &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-open-source-summit-ai-dev-china/">KubeCon+CloudNativeCon China 2024&lt;/a> in Hong Kong.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: Hello! I’m &lt;a href="https://www.linkedin.com/in/patrickohly/">Patrick&lt;/a>. I’ve contributed to Kubernetes since 2018. I started in &lt;a href="https://github.com/kubernetes/community/blob/master/sig-storage/README.md">SIG Storage&lt;/a> and then got involved in more and more areas. Nowadays, I am a SIG Testing tech lead, logging infrastructure maintainer, organizer of the &lt;a href="https://github.com/kubernetes/community/tree/master/wg-structured-logging">Structured Logging&lt;/a> and &lt;a href="https://github.com/kubernetes/community/tree/master/wg-device-management">Device Management&lt;/a> working groups, contributor in &lt;a href="https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md">SIG Scheduling&lt;/a>, and of course member of the Steering Committee. My main focus area currently is &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/">Dynamic Resource Allocation (DRA)&lt;/a>, a new API for accelerators.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Hey, my name is &lt;a href="https://www.linkedin.com/in/maciejszulik/">Maciej&lt;/a> and I&amp;rsquo;ve been working on Kubernetes since late 2014 in various areas, including controllers, apiserver and kubectl. Aside from being part of the Steering Committee, I’m also helping guide &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cli/README.md">SIG CLI&lt;/a>, &lt;a href="https://github.com/kubernetes/community/blob/master/sig-apps/README.md">SIG Apps&lt;/a> and &lt;a href="https://github.com/kubernetes/community/blob/master/wg-batch/README.md">WG Batch&lt;/a>.&lt;/p>
&lt;h2 id="about-the-steering-committee">About the Steering Committee&lt;/h2>
&lt;p>&lt;strong>Sandipan: What does Steering do?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben:&lt;/strong> The charter is the definitive answer, but I see Steering as helping resolve
Kubernetes-organization-level &amp;ldquo;people problems&amp;rdquo; (as opposed to technical problems), such as
clarifying project governance and liaising with the Cloud Native Computing Foundation (for example,
to request additional resources and support) and other CNCF projects.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Our
&lt;a href="https://github.com/kubernetes/steering/blob/main/charter.md#direct-responsibilities-of-the-steering-committee">charter&lt;/a>
nicely describes all the responsibilities. In short, we make sure the project runs smoothly by
supporting our maintainers and contributors in their daily tasks.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: Ideally, we don’t do anything 😀 All of the day-to-day business has been delegated to
SIGs and WGs. Steering gets involved when something pops up where it isn’t obvious who should handle
it or when conflicts need to be resolved.&lt;/p>
&lt;p>&lt;strong>Sandipan: And how is Steering different from SIGs?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: From a governance perspective: Steering delegates all of the ownership of subprojects to
the SIGs and/or committees (&lt;em>Security Response&lt;/em>, &lt;em>Code Of Conduct&lt;/em>, etc.). They’re very different.
The SIGs own pieces of the project, and Steering handles some of the overarching people and policy
issues. You’ll find all of the software development, releasing, communications and documentation
work happening in the SIGs and committees.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: SIGs or WGs are primarily concerned with the technical direction of a particular area in
Kubernetes. Steering, on the other hand, is primarily concerned with ensuring all the SIGs, WGs, and
most importantly maintainers have everything they need to run the project smoothly. This includes
anything from ensuring financing of our CI systems, through governance structures and policies all
the way to supporting individual maintainers in various inquiries.&lt;/p>
&lt;p>&lt;strong>Sandipan: You&amp;rsquo;ve mentioned projects, could you give us an example of a project Steering has worked
on recently?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: We’ve been discussing the logistics to sync a better definition of the project’s official
maintainers to the CNCF, which are used, for example, to vote for the &lt;a href="https://www.cncf.io/people/technical-oversight-committee/">Technical Oversight
Committee&lt;/a> (TOC). Currently that list is
the Steering Committee, with SIG Contributor Experience and Infra + Release leads having access to
the CNCF service desk. This isn’t well standardized yet across CNCF projects but I think it’s
important.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: For the past year that I&amp;rsquo;ve been sitting on the SC, the majority of tasks we&amp;rsquo;ve
been involved in were around providing letters supporting visa applications. Also, like every year,
we&amp;rsquo;ve been helping all the SIGs and WGs with their annual reports.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: Apparently it has been a quiet year since Maciej and I joined the Steering Committee at
the end of 2023. That’s exactly how it should be.&lt;/p>
&lt;p>&lt;strong>Sandipan: Do you have any examples of projects that came to Steering, which you then redirected to
SIGs?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: We often get requests for test/build related resources that we redirect to SIG K8s Infra +
SIG Testing, or more specifically about releasing for subprojects that we redirect to SIG K8s Infra
/ SIG Release.&lt;/p>
&lt;h2 id="the-road-to-the-steering-committee">The road to the Steering Committee&lt;/h2>
&lt;p>&lt;strong>Sandipan: What motivated you to be part of the Steering Committee? What has your journey been
like?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: I had a few people reach out and prompt me to run, but I was motivated by my passion for
this community and the project. I think we have something really special going here and I care
deeply about the ongoing success. I’ve been involved in this space my whole career and while there’s
always rough edges, this community has been really supportive and I hope we can keep it that way.&lt;/p>
&lt;p>&lt;strong>Paco&lt;/strong>: At the &lt;a href="https://www.kubernetes.dev/events/2023/kcseu/">Kubernetes Contributor Summit EU
2023&lt;/a>, I met and chatted with many maintainers and
members, and attended the Steering AMA there for the first time. Since there hadn&amp;rsquo;t been a
contributor summit in China since 2019, I started connecting with contributors in China to make one
happen later that year. Through conversations at KCS EU and with local contributors, I realized how
important it is to make it easy for APAC contributors to start a contributor journey, and I wanted
to attract more contributors to the community. I was then elected just after &lt;a href="https://www.kubernetes.dev/events/2023/kcscn/">KCS CN 2023&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: I had done a lot of technical work, of which some affects and (hopefully) benefits all
contributors to Kubernetes (linting and testing) and users (better log output). I saw joining the
Steering Committee as an opportunity to help also with the organizational aspects of running a big
open source project.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: I’ve been going through the idea of running for SC for a while now. My biggest drive was
conversations with various members of our community. Eventually last year, I decided to follow their
advice, and got elected :-)&lt;/p>
&lt;p>&lt;strong>Sandipan: What is your favorite part of being part of Steering?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: When we get to help contributors directly. For example, long-time contributors
sometimes reach out for an official letter from Steering explaining their contributions and their
value, to support visa applications. When we get to just purely help out Kubernetes contributors, that&amp;rsquo;s my favorite part.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: It’s a good place to learn more about how the project is actually run, directly from
the other great people who are doing it.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: The same thing as with the project &amp;mdash; it’s always the people that surround us, that give
us opportunities to collaborate and create something interesting and exciting.&lt;/p>
&lt;p>&lt;strong>Sandipan: What do you think is most challenging about being part of Steering?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: I think we’ve all spent a lot of time grappling with the sustainability issues in the
project and not having a single great answer to solve them. A lot of people are working on these
problems but we have limited time and resources. We’ve officially delegated most of this (for
example, to SIGs Contributor Experience and K8s Infra), but I think we all still consider it very
important and deserving of more time and energy, yet we only have so much and the answers are not
obvious. The balancing act is hard.&lt;/p>
&lt;p>&lt;strong>Paco&lt;/strong>: Sustainability of contributors and maintainers is one of the most challenging aspects to
me. I am constantly advocating for OSS users and employers to join the community. The community is a
place where developers can learn from each other, discuss issues they encounter, and share their
experiences and solutions. Ensuring that everyone in the community feels supported and valued is crucial
for the long-term health of the project.&lt;/p>
&lt;p>&lt;strong>Patrick&lt;/strong>: There is documentation about how things are done, but it’s not exhaustive. There are
parts which are intentionally not documented, perhaps because they cannot be public, change too
often, or simply need to be handled on a case-by-case basis. Luckily we have overlapping terms, so
there is an opportunity to learn from more experienced members. We also have a list of former
members and they are happy to respond to questions if needed.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: The unknown unknowns :-) After I got elected to the SC, I tried to talk to various folks
from current and past SCs. The biggest lesson that came from all those discussions is that no
matter how hard you try and how much you prepare, there will always be something new that none of
the previous SCs has had to deal with so far.&lt;/p>
&lt;p>&lt;strong>Sandipan: For folks who might want to run for Steering in the future, what are the most important
things you think they should know?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Ben&lt;/strong>: A lot of what Steering does is &amp;ldquo;interrupt driven&amp;rdquo;: something comes up and needs
resolution, so make sure you&amp;rsquo;re committed and prepared to set aside the time. Beyond that, I hope
you think calmly about issues and listen to our community with empathy.&lt;/p>
&lt;p>&lt;strong>Paco&lt;/strong>: To quote Clayton from the survey of all previous Steering Committees: we should make
sure that &amp;ldquo;everyone&amp;rsquo;s voice was heard and respected&amp;rdquo;. For the community, the most important part is
the people here.&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Once you decide to run and get elected, make sure to reserve a consistent block of
time each week for your Steering duties. There will be times when nothing needs to happen, and
others when the work will overflow it, so reserving that timeframe ensures consistency in your role
as a Steering member.&lt;/p>
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>Behind every successful open source project is a group of dedicated people who ensure things run
smoothly, and the Kubernetes Steering Committee does just that. They work quietly but effectively,
tackling challenges, supporting contributors, and ensuring the community remains inclusive and
vibrant.&lt;/p>
&lt;p>What makes them stand out is their focus on people &amp;mdash; empowering contributors, resolving governance
issues, and creating an environment where innovation can thrive. It’s not always easy, but as
they’ve shared, it’s deeply rewarding.&lt;/p>
&lt;p>Whether you’re a long-time contributor or thinking about getting involved, the Kubernetes community
is open to you. At its heart, Kubernetes is about more than just technology &amp;mdash; it’s about the people
who make it all happen. There’s always room for one more voice to help shape the future.&lt;/p></description></item><item><title>Blog: Post-Quantum Cryptography in Kubernetes</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/07/18/pqc-in-k8s/</link><pubDate>Fri, 18 Jul 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/07/18/pqc-in-k8s/</guid><description>
&lt;p>The world of cryptography is on the cusp of a major shift with the advent of
quantum computing. While powerful quantum computers are still largely
theoretical for many applications, their potential to break current
cryptographic standards is a serious concern, especially for long-lived
systems. This is where &lt;em>Post-Quantum Cryptography&lt;/em> (PQC) comes in. In this
article, I&amp;rsquo;ll dive into what PQC means for TLS and, more specifically, for the
Kubernetes ecosystem. I&amp;rsquo;ll explain what the (surprising) state of PQC in
Kubernetes is and what the implications are for current and future clusters.&lt;/p>
&lt;h2 id="what-is-post-quantum-cryptography">What is Post-Quantum Cryptography&lt;/h2>
&lt;p>Post-Quantum Cryptography refers to cryptographic algorithms that are thought to
be secure against attacks by both classical and quantum computers. The primary
concern is that quantum computers, using algorithms like &lt;a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor's Algorithm&lt;/a>,
could efficiently break widely used public-key cryptosystems such as RSA and
Elliptic Curve Cryptography (ECC), which underpin much of today's secure
communication, including TLS. The industry is actively working on standardizing
and adopting PQC algorithms. One of the first to be standardized by &lt;a href="https://www.nist.gov/">NIST&lt;/a> is
the Module-Lattice Key Encapsulation Mechanism (&lt;code>ML-KEM&lt;/code>), formerly known as
Kyber, and now standardized as &lt;a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.203.pdf">FIPS-203&lt;/a> (PDF download).&lt;/p>
&lt;p>It is difficult to predict when quantum computers will be able to break
classical algorithms. However, it is clear that we need to start migrating to
PQC algorithms now, as the next section shows. To get a feeling for the
predicted timeline we can look at a &lt;a href="https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.ipd.pdf">NIST report&lt;/a> covering the transition to
post-quantum cryptography standards. It states that systems using classical
cryptography should be deprecated after 2030 and disallowed after 2035.&lt;/p>
&lt;h2 id="timelines">Key exchange vs. digital signatures: different needs, different timelines&lt;/h2>
&lt;p>In TLS, there are two main cryptographic operations we need to secure:&lt;/p>
&lt;p>&lt;strong>Key Exchange&lt;/strong>: This is how the client and server agree on a shared secret to
encrypt their communication. If an attacker records encrypted traffic today,
they could decrypt it in the future, if they gain access to a quantum computer
capable of breaking the key exchange. This makes migrating KEMs to PQC an
immediate priority.&lt;/p>
&lt;p>&lt;strong>Digital Signatures&lt;/strong>: These are primarily used to authenticate the server (and
sometimes the client) via certificates. The authenticity of a server is
verified at the time of connection. While important, the risk of an attack
today is much lower, because the decision to trust a server cannot be abused
after the fact. Additionally, current PQC signature schemes often come with
significant computational overhead and larger key/signature sizes compared to
their classical counterparts.&lt;/p>
&lt;p>Another significant hurdle in the migration to PQ certificates is the upgrade
of root certificates. These certificates have long validity periods and are
installed in many devices and operating systems as trust anchors.&lt;/p>
&lt;p>Given these differences, the focus for immediate PQC adoption in TLS has been
on hybrid key exchange mechanisms. These combine a classical algorithm (such as
Elliptic Curve Diffie-Hellman Ephemeral (ECDHE)) with a PQC algorithm (such as
&lt;code>ML-KEM&lt;/code>). The resulting shared secret is secure as long as at least one of the
component algorithms remains unbroken. The &lt;code>X25519MLKEM768&lt;/code> hybrid scheme is the
most widely supported one.&lt;/p>
&lt;h2 id="state-of-kems">State of PQC key exchange mechanisms (KEMs) today&lt;/h2>
&lt;p>Support for PQC KEMs is rapidly improving across the ecosystem.&lt;/p>
&lt;p>&lt;strong>Go&lt;/strong>: The Go standard library's &lt;code>crypto/tls&lt;/code> package introduced support for
&lt;code>X25519MLKEM768&lt;/code> in version 1.24 (released February 2025). Crucially, it's
enabled by default when there is no explicit configuration, i.e.,
&lt;code>Config.CurvePreferences&lt;/code> is &lt;code>nil&lt;/code>.&lt;/p>
&lt;p>&lt;strong>Browsers &amp;amp; OpenSSL&lt;/strong>: Major browsers like Chrome (version 131, November 2024)
and Firefox (version 135, February 2025), as well as OpenSSL (version 3.5.0,
April 2025), have also added support for the &lt;code>ML-KEM&lt;/code> based hybrid scheme.&lt;/p>
&lt;p>Apple is also &lt;a href="https://support.apple.com/en-lb/122756">rolling out support&lt;/a> for &lt;code>X25519MLKEM768&lt;/code> in version
26 of their operating systems. Given the proliferation of Apple devices, this
will have a significant impact on global PQC adoption.&lt;/p>
&lt;p>For a more detailed overview of the state of PQC in the wider industry,
see &lt;a href="https://blog.cloudflare.com/pq-2024/">this blog post by Cloudflare&lt;/a>.&lt;/p>
&lt;h2 id="post-quantum-kems-in-kubernetes-an-unexpected-arrival">Post-quantum KEMs in Kubernetes: an unexpected arrival&lt;/h2>
&lt;p>So, what does this mean for Kubernetes? Kubernetes components, including the
API server and kubelet, are built with Go.&lt;/p>
&lt;p>As of Kubernetes v1.33, released in April 2025, the project uses Go 1.24. A
quick check of the Kubernetes codebase reveals that &lt;code>Config.CurvePreferences&lt;/code>
is not explicitly set. This leads to a fascinating conclusion: Kubernetes
v1.33, by virtue of using Go 1.24, supports hybrid post-quantum
&lt;code>X25519MLKEM768&lt;/code> for TLS connections by default!&lt;/p>
&lt;p>You can test this yourself. If you set up a Minikube cluster running Kubernetes
v1.33.0, you can connect to the API server using a recent OpenSSL client:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> minikube start --kubernetes-version&lt;span style="color:#666">=&lt;/span>v1.33.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> kubectl cluster-info
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Kubernetes control plane is running at https://127.0.0.1:&amp;lt;PORT&amp;gt;
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">&lt;/span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> kubectl config view --minify --raw -o &lt;span style="color:#b8860b">jsonpath&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#b62;font-weight:bold">&amp;#39;&lt;/span>&lt;span style="color:#666">{&lt;/span>.clusters&lt;span style="color:#666">[&lt;/span>0&lt;span style="color:#666">]&lt;/span>.cluster.certificate-authority-data&lt;span style="color:#666">}&lt;/span>&lt;span style="color:#b62;font-weight:bold">&amp;#39;&lt;/span> | base64 -d &amp;gt; ca.crt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> openssl version
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">OpenSSL 3.5.0 8 Apr 2025 (Library: OpenSSL 3.5.0 8 Apr 2025)
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">&lt;/span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> &lt;span style="color:#a2f">echo&lt;/span> -n &lt;span style="color:#b44">&amp;#34;Q&amp;#34;&lt;/span> | openssl s_client -connect 127.0.0.1:&amp;lt;PORT&amp;gt; -CAfile ca.crt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">[...]
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Negotiated TLS1.3 group: X25519MLKEM768
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">[...]
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">DONE
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Lo and behold, the negotiated group is &lt;code>X25519MLKEM768&lt;/code>! This is a significant
step towards making Kubernetes quantum-safe, seemingly without a major
announcement or dedicated KEP (Kubernetes Enhancement Proposal).&lt;/p>
&lt;h2 id="the-go-version-mismatch-pitfall">The Go version mismatch pitfall&lt;/h2>
&lt;p>An interesting wrinkle emerged with Go versions 1.23 and 1.24. Go 1.23
included experimental support for a draft version of &lt;code>ML-KEM&lt;/code>, identified as
&lt;code>X25519Kyber768Draft00&lt;/code>. This was also enabled by default if
&lt;code>Config.CurvePreferences&lt;/code> was &lt;code>nil&lt;/code>. Kubernetes v1.32 used Go 1.23. However,
Go 1.24 removed the draft support and replaced it with the standardized version
&lt;code>X25519MLKEM768&lt;/code>.&lt;/p>
&lt;p>What happens if a client and server are using mismatched Go versions (one on
1.23, the other on 1.24)? They won't have a common PQC KEM to negotiate, and
the handshake will fall back to classical ECC curves (e.g., &lt;code>X25519&lt;/code>). How
could this happen in practice?&lt;/p>
&lt;p>Consider a scenario:&lt;/p>
&lt;p>A Kubernetes cluster is running v1.32 (using Go 1.23 and thus
&lt;code>X25519Kyber768Draft00&lt;/code>). A developer upgrades their &lt;code>kubectl&lt;/code> to v1.33,
compiled with Go 1.24, which only supports &lt;code>X25519MLKEM768&lt;/code>. Now, when &lt;code>kubectl&lt;/code>
communicates with the v1.32 API server, they no longer share a common PQC
algorithm. The connection will downgrade to classical cryptography, silently
losing the PQC protection that has been in place. This highlights the
importance of understanding the implications of Go version upgrades, and the
details of the TLS stack.&lt;/p>
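&lt;p>One defensive option, sketched below under the assumption of a Go 1.24+ toolchain, is to pin &lt;code>CurvePreferences&lt;/code> to the hybrid group only on the client: a peer without &lt;code>X25519MLKEM768&lt;/code> support then causes a hard handshake failure instead of a silent downgrade. The loopback server here stands in for a hypothetical Go 1.23-era peer.&lt;/p>

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedCert builds a throwaway certificate for the loopback demo.
func selfSignedCert() tls.Certificate {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}
}

// runDemo dials a classical-only server with a hybrid-only client and
// reports whether the handshake succeeded.
func runDemo() string {
	// The server stands in for a Go 1.23-era peer: no X25519MLKEM768 support.
	srvCfg := tls.Config{
		Certificates:     []tls.Certificate{selfSignedCert()},
		CurvePreferences: []tls.CurveID{tls.X25519},
	}
	ln, err := tls.Listen("tcp", "127.0.0.1:0", &srvCfg)
	if err != nil {
		return err.Error()
	}
	defer ln.Close()
	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			c.(*tls.Conn).Handshake() // expected to fail: no common group
			c.Close()
		}
	}()

	// The client pins the hybrid group only, so it cannot silently fall
	// back to classical curves; the mismatch becomes a visible error.
	cliCfg := tls.Config{
		InsecureSkipVerify: true, // throwaway certificate; demo only
		CurvePreferences:   []tls.CurveID{tls.X25519MLKEM768},
	}
	conn, err := tls.Dial("tcp", ln.Addr().String(), &cliCfg)
	if err != nil {
		return "handshake failed: no common key exchange group"
	}
	conn.Close()
	return "handshake succeeded"
}

func main() {
	fmt.Println(runDemo())
}
```

&lt;p>Hard-failing is a trade-off: it surfaces the downgrade immediately, at the cost of refusing to talk to classical-only peers.&lt;/p>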
&lt;h2 id="limitation-packet-size">Limitations: packet size&lt;/h2>
&lt;p>One practical consideration with &lt;code>ML-KEM&lt;/code> is the size of its public keys:
the encoded key is around 1.2 kilobytes for &lt;code>ML-KEM-768&lt;/code>.
This can cause the initial TLS &lt;code>ClientHello&lt;/code> message not to fit inside
a single TCP/IP packet, given the typical networking constraints
(most commonly, the standard Ethernet frame size limit of 1500
bytes). Some TLS libraries or network appliances might not handle this
gracefully, assuming the &lt;code>ClientHello&lt;/code> always fits in one packet. This issue
has been observed in some Kubernetes-related projects and networking
components, potentially leading to connection failures when PQC KEMs are used.
More details can be found at &lt;a href="https://tldr.fail/">tldr.fail&lt;/a>.&lt;/p>
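&lt;p>Back-of-the-envelope arithmetic shows why. In the sketch below, the key share sizes are the fixed &lt;code>ML-KEM-768&lt;/code> and X25519 encodings, while the base &lt;code>ClientHello&lt;/code> size is an assumed ballpark figure (it varies by client and enabled extensions), not an exact encoding.&lt;/p>

```go
package main

import "fmt"

// estimateHello adds an assumed base ClientHello size (record headers,
// cipher suites, SNI, ALPN, and other extensions; varies by client) to the
// fixed sizes of a hybrid key share, and checks the result against the
// payload that fits in one standard Ethernet packet.
func estimateHello() (size int, fitsInOnePacket bool) {
	const (
		mtu          = 1500 // standard Ethernet payload limit
		ipTCPHeaders = 40   // IPv4 (20 bytes) + TCP (20 bytes), no options
		baseHello    = 500  // assumed ballpark, not an exact encoding
		x25519Share  = 32   // classical key share
		mlkemShare   = 1184 // ML-KEM-768 encapsulation key
	)
	size = baseHello + x25519Share + mlkemShare
	fitsInOnePacket = !(size+ipTCPHeaders > mtu)
	return size, fitsInOnePacket
}

func main() {
	size, fits := estimateHello()
	fmt.Printf("estimated hybrid ClientHello: %d bytes, fits in one packet: %v\n",
		size, fits)
}
```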
&lt;h2 id="state-of-post-quantum-signatures">State of Post-Quantum Signatures&lt;/h2>
&lt;p>While KEMs are seeing broader adoption, PQC digital signatures are further
behind in terms of widespread integration into standard toolchains. NIST has
published standards for PQC signatures, such as &lt;code>ML-DSA&lt;/code> (&lt;code>FIPS-204&lt;/code>) and
&lt;code>SLH-DSA&lt;/code> (&lt;code>FIPS-205&lt;/code>). However, implementing these in a way that's broadly
usable (e.g., for PQC Certificate Authorities) &lt;a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#the-algorithms">presents challenges&lt;/a>:&lt;/p>
&lt;p>&lt;strong>Larger Keys and Signatures&lt;/strong>: PQC signature schemes often have significantly
larger public keys and signature sizes compared to classical algorithms like
Ed25519 or RSA. For instance, Dilithium2 keys can be 30 times larger than
Ed25519 keys, and certificates can be 12 times larger.&lt;/p>
&lt;p>&lt;strong>Performance&lt;/strong>: Signing and verification operations &lt;a href="https://pqshield.github.io/nist-sigs-zoo/">can be substantially slower&lt;/a>.
While some algorithms are on par with classical algorithms, others may have a
much higher overhead, sometimes on the order of 10x to 1000x worse performance.
To improve this situation, NIST is running a
&lt;a href="https://csrc.nist.gov/news/2024/pqc-digital-signature-second-round-announcement">second round of standardization&lt;/a> for PQC signatures.&lt;/p>
&lt;p>&lt;strong>Toolchain Support&lt;/strong>: Mainstream TLS libraries and CA software do not yet have
mature, built-in support for these new signature algorithms. The Go team, for
example, has indicated that &lt;code>ML-DSA&lt;/code> support is a high priority, but the
soonest it might appear in the standard library is Go 1.26 &lt;a href="https://github.com/golang/go/issues/64537#issuecomment-2877714729">(as of May 2025)&lt;/a>.&lt;/p>
&lt;p>&lt;a href="https://github.com/cloudflare/circl">Cloudflare's CIRCL&lt;/a> (Cloudflare Interoperable Reusable Cryptographic Library)
implements some PQC signature schemes, like variants of Dilithium, and
they maintain a &lt;a href="https://github.com/cloudflare/go">fork of Go (cfgo)&lt;/a> that integrates CIRCL. Using &lt;code>cfgo&lt;/code>, it's
possible to experiment with generating certificates signed with PQC algorithms
like Ed25519-Dilithium2. However, this requires using a custom Go toolchain and
is not yet part of the mainstream Kubernetes or Go distributions.&lt;/p>
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>The journey to a post-quantum secure Kubernetes is underway, and perhaps
further along than many realize, thanks to the proactive adoption of &lt;code>ML-KEM&lt;/code>
in Go. With Kubernetes v1.33, users are already benefiting from hybrid post-quantum key
exchange in many TLS connections by default.&lt;/p>
&lt;p>However, awareness of potential pitfalls, such as Go version mismatches leading
to downgrades and issues with Client Hello packet sizes, is crucial. While PQC
for KEMs is becoming a reality, PQC for digital signatures and certificate
hierarchies is still in earlier stages of development and adoption for
mainstream use. As Kubernetes maintainers and contributors, staying informed
about these developments will be key to ensuring the long-term security of the
platform.&lt;/p></description></item><item><title>Blog: Changes to Kubernetes Slack</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/06/16/changes-to-kubernetes-slack-2025/</link><pubDate>Mon, 16 Jun 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/06/16/changes-to-kubernetes-slack-2025/</guid><description>
&lt;p>&lt;strong>UPDATE&lt;/strong>: We’ve received notice from Salesforce that our Slack workspace &lt;strong>WILL NOT BE DOWNGRADED&lt;/strong> on June 20th. Stand by for more details, but for now, there is no urgency to back up private channels or direct messages.&lt;/p>
&lt;p>&lt;del>Kubernetes Slack will lose its special status and will be changing into a standard free Slack on June 20, 2025&lt;/del>. Sometime later this year, our community may move to a new platform. If you are responsible for a channel or private channel, or a member of a User Group, you will need to take some actions as soon as you can.&lt;/p>
&lt;p>For the last decade, Slack has supported our project with a free customized enterprise account. They have let us know that they can no longer do so, particularly since our Slack is one of the largest and more active ones on the platform. As such, they will be downgrading it to a standard free Slack while we decide on, and implement, other options.&lt;/p>
&lt;p>On Friday, June 20, we will be subject to the &lt;a href="https://slack.com/help/articles/27204752526611-Feature-limitations-on-the-free-version-of-Slack">feature limitations of free Slack&lt;/a>. The main ones affecting us will be retaining only 90 days of history and having to disable several apps and workflows that we currently use. The Slack Admin team will do their best to manage these limitations.&lt;/p>
&lt;p>Responsible channel owners, members of private channels, and members of User Groups should &lt;a href="https://github.com/kubernetes/community/blob/master/communication/slack-migration-faq.md#what-actions-do-channel-owners-and-user-group-members-need-to-take-soon">take some actions&lt;/a> to prepare for the downgrade and preserve information as soon as possible.&lt;/p>
&lt;p>The CNCF projects staff have proposed that our community look at migrating to Discord. Because of existing issues where we have been pushing the limits of Slack, they have already explored what a Kubernetes Discord would look like. Discord would allow us to implement new tools and integrations which would help the community, such as GitHub group membership synchronization. The Steering Committee will discuss and decide on our future platform.&lt;/p>
&lt;p>Please see our &lt;a href="https://github.com/kubernetes/community/blob/master/communication/slack-migration-faq.md">FAQ&lt;/a>, and check the &lt;a href="https://groups.google.com/a/kubernetes.io/g/dev/">kubernetes-dev mailing list&lt;/a> and the &lt;a href="https://kubernetes.slack.com/archives/C9T0QMNG4">#announcements channel&lt;/a> for further news. If you have specific feedback on our Slack status join the &lt;a href="https://github.com/kubernetes/community/issues/8490">discussion on GitHub&lt;/a>.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Apps</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/03/12/sig-apps-spotlight-2025/</link><pubDate>Wed, 12 Mar 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/03/12/sig-apps-spotlight-2025/</guid><description>
&lt;p>In our ongoing SIG Spotlight series, we dive into the heart of the Kubernetes project by talking to
the leaders of its various Special Interest Groups (SIGs). This time, we focus on
&lt;strong>&lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group">SIG Apps&lt;/a>&lt;/strong>,
the group responsible for everything related to developing, deploying, and operating applications on
Kubernetes. &lt;a href="https://www.linkedin.com/in/sandipanpanda">Sandipan Panda&lt;/a>
(&lt;a href="https://www.devzero.io/">DevZero&lt;/a>) had the opportunity to interview &lt;a href="https://github.com/soltysh">Maciej
Szulik&lt;/a> (&lt;a href="https://defenseunicorns.com/">Defense Unicorns&lt;/a>) and &lt;a href="https://github.com/janetkuo">Janet
Kuo&lt;/a> (&lt;a href="https://about.google/">Google&lt;/a>), the chairs and tech leads of
SIG Apps. They shared their experiences, challenges, and visions for the future of application
management within the Kubernetes ecosystem.&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Sandipan: Hello, could you start by telling us a bit about yourself, your role, and your journey
within the Kubernetes community that led to your current roles in SIG Apps?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Hey, my name is Maciej, and I’m one of the leads for SIG Apps. Aside from this role, you
can also find me helping
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli#readme">SIG CLI&lt;/a> and also being one of
the Steering Committee members. I’ve been contributing to Kubernetes since late 2014 in various
areas, including controllers, apiserver, and kubectl.&lt;/p>
&lt;p>&lt;strong>Janet&lt;/strong>: Certainly! I&amp;rsquo;m Janet, a Staff Software Engineer at Google, and I&amp;rsquo;ve been deeply involved
with the Kubernetes project since its early days, even before the 1.0 launch in 2015. It&amp;rsquo;s been an
amazing journey!&lt;/p>
&lt;p>My current role within the Kubernetes community is one of the chairs and tech leads of SIG Apps. My
journey with SIG Apps started organically. I started with building the &lt;code>Deployment&lt;/code> API and adding
rolling update functionalities. I naturally gravitated towards SIG Apps and became increasingly
involved. Over time, I took on more responsibilities, culminating in my current leadership roles.&lt;/p>
&lt;h2 id="about-sig-apps">About SIG Apps&lt;/h2>
&lt;p>&lt;em>All following answers were jointly provided by Maciej and Janet.&lt;/em>&lt;/p>
&lt;p>&lt;strong>Sandipan: For those unfamiliar, could you provide an overview of SIG Apps&amp;rsquo; mission and objectives?
What key problems does it aim to solve within the Kubernetes ecosystem?&lt;/strong>&lt;/p>
&lt;p>As described in our
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-apps/charter.md#scope">charter&lt;/a>, we cover a
broad area related to developing, deploying, and operating applications on Kubernetes. That, in
short, means we’re open to each and everyone showing up at our bi-weekly meetings and discussing the
ups and downs of writing and deploying various applications on Kubernetes.&lt;/p>
&lt;p>&lt;strong>Sandipan: What are some of the most significant projects or initiatives currently being undertaken
by SIG Apps?&lt;/strong>&lt;/p>
&lt;p>At this point in time, the main factors driving the development of our controllers are the
challenges coming from running various AI-related workloads. It’s worth giving credit here to two
working groups we’ve sponsored over the past years:&lt;/p>
&lt;ol>
&lt;li>&lt;a href="https://github.com/kubernetes/community/tree/master/wg-batch">The Batch Working Group&lt;/a>, which is
looking at running HPC, AI/ML, and data analytics jobs on top of Kubernetes.&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/community/tree/master/wg-serving">The Serving Working Group&lt;/a>, which
is focusing on hardware-accelerated AI/ML inference.&lt;/li>
&lt;/ol>
&lt;h2 id="best-practices-and-challenges">Best practices and challenges&lt;/h2>
&lt;p>&lt;strong>Sandipan: SIG Apps plays a crucial role in developing application management best practices for
Kubernetes. Can you share some of these best practices and how they help improve application
lifecycle management?&lt;/strong>&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Implementing &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/">health checks and readiness probes&lt;/a>
ensures that your applications are healthy and ready to serve traffic, leading to improved
reliability and uptime. The above, combined with comprehensive logging, monitoring, and tracing
solutions, will provide insights into your application&amp;rsquo;s behavior, enabling you to identify and
resolve issues quickly.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://kubernetes.io/docs/concepts/workloads/autoscaling/">Auto-scale your application&lt;/a> based
on resource utilization or custom metrics, optimizing resource usage and ensuring your
application can handle varying loads.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Use &lt;code>Deployment&lt;/code> for stateless applications, &lt;code>StatefulSet&lt;/code> for stateful applications, &lt;code>Job&lt;/code>
and &lt;code>CronJob&lt;/code> for batch workloads, and &lt;code>DaemonSet&lt;/code> for running a daemon on each node. Use
Operators and CRDs to extend the Kubernetes API to automate the deployment, management, and
lifecycle of complex applications, making them easier to operate and reducing manual
intervention.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Sandipan: What are some of the common challenges SIG Apps faces, and how do you address them?&lt;/strong>&lt;/p>
&lt;p>The biggest challenge we’re facing all the time is the need to reject a lot of features, ideas, and
improvements. This requires a lot of discipline and patience to be able to explain the reasons
behind those decisions.&lt;/p>
&lt;p>&lt;strong>Sandipan: How has the evolution of Kubernetes influenced the work of SIG Apps? Are there any
recent changes or upcoming features in Kubernetes that you find particularly relevant or beneficial
for SIG Apps?&lt;/strong>&lt;/p>
&lt;p>The main benefit for both us and the whole community around SIG Apps is the ability to extend
Kubernetes with &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">Custom Resource Definitions&lt;/a>,
and the fact that users can build their own custom controllers, leveraging the built-in ones, to
achieve sophisticated use cases that we, as the core maintainers, haven’t considered or weren’t
able to efficiently resolve inside Kubernetes.&lt;/p>
&lt;h2 id="contributing-to-sig-apps">Contributing to SIG Apps&lt;/h2>
&lt;p>&lt;strong>Sandipan: What opportunities are available for new contributors who want to get involved with SIG
Apps, and what advice would you give them?&lt;/strong>&lt;/p>
&lt;p>We get the question, &amp;ldquo;What good first issue might you recommend we start with?&amp;rdquo; a lot :-) But
unfortunately, there’s no easy answer to it. We always tell everyone that the best option to start
contributing to core controllers is to find one you are willing to spend some time with. Read
through the code, then try running unit tests and integration tests focusing on that
controller. Once you grasp the general idea, try breaking it and running the tests again to verify your
breakage. Once you start feeling confident you understand that particular controller, you may want
to search through open issues affecting that controller and either provide suggestions, explaining
the problem users have, or maybe attempt your first fix.&lt;/p>
&lt;p>Like we said, there are no shortcuts on that road; you need to spend the time with the codebase to
understand all the edge cases we’ve slowly built up to get to the point where we are. Once you’re
successful with one controller, you’ll need to repeat that same process with others all over again.&lt;/p>
&lt;p>&lt;strong>Sandipan: How does SIG Apps gather feedback from the community, and how is this feedback
integrated into your work?&lt;/strong>&lt;/p>
&lt;p>We always encourage everyone to show up and present their problems and solutions during our
bi-weekly &lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps#meetings">meetings&lt;/a>. As long
as you’re solving an interesting problem on top of Kubernetes and you can provide valuable feedback
about any of the core controllers, we’re always happy to hear from everyone.&lt;/p>
&lt;h2 id="looking-ahead">Looking ahead&lt;/h2>
&lt;p>&lt;strong>Sandipan: Looking ahead, what are the key focus areas or upcoming trends in application management
within Kubernetes that SIG Apps is excited about? How is the SIG adapting to these trends?&lt;/strong>&lt;/p>
&lt;p>Definitely the current AI hype is the major driving factor; as mentioned above, we have two working
groups, each covering a different aspect of it.&lt;/p>
&lt;p>&lt;strong>Sandipan: What are some of your favorite things about this SIG?&lt;/strong>&lt;/p>
&lt;p>Without a doubt, the people that participate in our meetings and on
&lt;a href="https://kubernetes.slack.com/messages/sig-apps">Slack&lt;/a>, who tirelessly help triage issues, pull
requests and invest a lot of their time (very frequently their private time) into making Kubernetes
great!&lt;/p>
&lt;hr>
&lt;p>SIG Apps is an essential part of the Kubernetes community, helping to shape how applications are
deployed and managed at scale. From its work on improving Kubernetes&amp;rsquo; workload APIs to driving
innovation in AI/ML application management, SIG Apps is continually adapting to meet the needs of
modern application developers and operators. Whether you’re a new contributor or an experienced
developer, there’s always an opportunity to get involved and make an impact.&lt;/p>
&lt;p>If you’re interested in learning more or contributing to SIG Apps, be sure to check out their &lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps">SIG
README&lt;/a> and join their bi-weekly &lt;a href="https://github.com/kubernetes/community/tree/master/sig-apps#meetings">meetings&lt;/a>.&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://groups.google.com/a/kubernetes.io/g/sig-apps">SIG Apps Mailing List&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://kubernetes.slack.com/messages/sig-apps">SIG Apps on Slack&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Blog: Spotlight on SIG etcd</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/03/04/sig-etcd-spotlight/</link><pubDate>Tue, 04 Mar 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/03/04/sig-etcd-spotlight/</guid><description>
&lt;p>In this SIG etcd spotlight we talked with &lt;a href="https://github.com/jmhbnz">James Blair&lt;/a>, &lt;a href="https://github.com/serathius">Marek
Siarkowicz&lt;/a>, &lt;a href="https://github.com/wenjiaswe">Wenjia Zhang&lt;/a>, and
&lt;a href="https://github.com/ahrtr">Benjamin Wang&lt;/a> to learn a bit more about this Kubernetes Special Interest
Group.&lt;/p>
&lt;h2 id="introducing-sig-etcd">Introducing SIG etcd&lt;/h2>
&lt;p>&lt;strong>Frederico: Hello, thank you for the time! Let’s start with some introductions, could you tell us a
bit about yourself, your role and how you got involved in Kubernetes.&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Benjamin:&lt;/strong> Hello, I am Benjamin. I am a SIG etcd Tech Lead and one of the etcd maintainers. I
work for VMware, which is part of the Broadcom group. I got involved in Kubernetes &amp;amp; etcd &amp;amp; CSI
(&lt;a href="https://github.com/container-storage-interface/spec/blob/master/spec.md">Container Storage Interface&lt;/a>)
because of work and also a big passion for open source. I have been working on Kubernetes &amp;amp; etcd
(and also CSI) since 2020.&lt;/p>
&lt;p>&lt;strong>James:&lt;/strong> Hey team, I’m James, a co-chair for SIG etcd and etcd maintainer. I work at Red Hat as a
Specialist Architect helping people adopt cloud native technology. I got involved with the
Kubernetes ecosystem in 2019. Around the end of 2022 I noticed how the etcd community and project
needed help, so I started contributing as often as I could. There is a saying in our community that
&amp;ldquo;you come for the technology, and stay for the people&amp;rdquo;: for me this is absolutely real, it’s been a
wonderful journey so far and I’m excited to support our community moving forward.&lt;/p>
&lt;p>&lt;strong>Marek:&lt;/strong> Hey everyone, I&amp;rsquo;m Marek, the SIG etcd lead. At Google, I lead the GKE etcd team, ensuring
a stable and reliable experience for all GKE users. My Kubernetes journey began with &lt;a href="https://github.com/kubernetes/community/tree/master/sig-instrumentation">SIG
Instrumentation&lt;/a>, where I
created and led the &lt;a href="https://kubernetes.io/blog/2020/09/04/kubernetes-1-19-introducing-structured-logs/">Kubernetes Structured Logging effort&lt;/a>.&lt;br>
I&amp;rsquo;m still the main project lead for &lt;a href="https://kubernetes-sigs.github.io/metrics-server/">Kubernetes Metrics Server&lt;/a>,
providing crucial signals for autoscaling in Kubernetes. I started working on etcd 3 years ago,
right around the 3.5 release. We faced some challenges, but I&amp;rsquo;m thrilled to see etcd now the most
scalable and reliable it&amp;rsquo;s ever been, with the highest contribution numbers in the project&amp;rsquo;s
history. I&amp;rsquo;m passionate about distributed systems, extreme programming, and testing.&lt;/p>
&lt;p>&lt;strong>Wenjia:&lt;/strong> Hi there, my name is Wenjia, I am the co-chair of SIG etcd and one of the etcd
maintainers. I work at Google as an Engineering Manager, working on GKE (Google Kubernetes Engine)
and GDC (Google Distributed Cloud). I have been working in the area of open source Kubernetes and
etcd since the Kubernetes v1.10 and etcd v3.1 releases. I got involved in Kubernetes because of my
job, but what keeps me in the space is the charm of the container orchestration technology, and more
importantly, the awesome open source community.&lt;/p>
&lt;h2 id="becoming-a-kubernetes-special-interest-group-sig">Becoming a Kubernetes Special Interest Group (SIG)&lt;/h2>
&lt;p>&lt;strong>Frederico: Excellent, thank you. I&amp;rsquo;d like to start with the origin of the SIG itself: SIG etcd is
a very recent SIG, could you quickly go through the history and reasons behind its creation?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: Absolutely! SIG etcd was formed because etcd is a critical component of Kubernetes,
serving as its data store. However, etcd was facing challenges like maintainer turnover and
reliability issues. &lt;a href="https://etcd.io/blog/2023/introducing-sig-etcd/">Creating a dedicated SIG&lt;/a>
allowed us to focus on addressing these problems, improving development and maintenance processes,
and ensuring etcd evolves in sync with the cloud-native landscape.&lt;/p>
&lt;p>&lt;strong>Frederico: And has becoming a SIG worked out as expected? Better yet, are the motivations you just
described being addressed, and to what extent?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: It&amp;rsquo;s been a positive change overall. Becoming a SIG has brought more structure and
transparency to etcd&amp;rsquo;s development. We&amp;rsquo;ve adopted Kubernetes processes like KEPs
(&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/README.md">Kubernetes Enhancement Proposals&lt;/a>
and PRRs (&lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md">Production Readiness Reviews&lt;/a>,
which has improved our feature development and release cycle.&lt;/p>
&lt;p>&lt;strong>Frederico: On top of those, what would you single out as the major benefit that has resulted from
becoming a SIG?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: The biggest benefit for me was adopting Kubernetes testing infrastructure, tools like
&lt;a href="https://docs.prow.k8s.io/">Prow&lt;/a> and &lt;a href="https://testgrid.k8s.io/">TestGrid&lt;/a>. For large projects like
etcd there is just no comparison to the default GitHub tooling. Having known, easy to use, clear
tools is a major boost to etcd as it makes it much easier for Kubernetes contributors to also
help etcd.&lt;/p>
&lt;p>&lt;strong>Wenjia&lt;/strong>: Totally agree, while challenges remain, the SIG structure provides a solid foundation
for addressing them and ensuring etcd&amp;rsquo;s continued success as a critical component of the Kubernetes
ecosystem.&lt;/p>
&lt;p>The positive impact on the community is another crucial aspect of SIG etcd&amp;rsquo;s success that I’d like
to highlight. The Kubernetes SIG structure has created a welcoming environment for etcd
contributors, leading to increased participation from the broader Kubernetes community. We have had
greater collaboration with other SIGs like &lt;a href="https://github.com/kubernetes/community/blob/master/sig-api-machinery/README.md">SIG API
Machinery&lt;/a>,
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-scalability">SIG Scalability&lt;/a>,
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-scalability">SIG Testing&lt;/a>,
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle">SIG Cluster Lifecycle&lt;/a>, etc.&lt;/p>
&lt;p>This collaboration helps ensure etcd&amp;rsquo;s development aligns with the needs of the wider Kubernetes
ecosystem. The formation of the &lt;a href="https://github.com/kubernetes/community/blob/master/wg-etcd-operator/README.md">etcd Operator Working Group&lt;/a>
under the joint effort between SIG etcd and SIG Cluster Lifecycle exemplifies this successful
collaboration, demonstrating a shared commitment to improving etcd&amp;rsquo;s operational aspects within
Kubernetes.&lt;/p>
&lt;p>&lt;strong>Frederico: Since you mentioned collaboration, have you seen changes in terms of contributors and
community involvement in recent months?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>James&lt;/strong>: Yes &amp;ndash; as shown in our
&lt;a href="https://etcd.devstats.cncf.io/d/23/prs-authors-repository-groups?orgId=1&amp;var-period=m&amp;var-repogroup_name=All&amp;from=1422748800000&amp;to=1738454399000">unique PR author data&lt;/a>
we recently hit an all-time high in March and are trending in a positive direction:&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/sig-etcd-spotlight/stats.png"
alt="Unique PR author data stats"/>
&lt;/figure>
&lt;p>Additionally, looking at our
&lt;a href="https://etcd.devstats.cncf.io/d/74/contributions-chart?orgId=1&amp;from=1422748800000&amp;to=1738454399000&amp;var-period=m&amp;var-metric=contributions&amp;var-repogroup_name=All&amp;var-country_name=All&amp;var-company_name=All&amp;var-company=all">overall contributions across all etcd project repositories&lt;/a>
we are also observing a positive trend showing a resurgence in etcd project activity:&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/sig-etcd-spotlight/stats2.png"
alt="Overall contributions stats"/>
&lt;/figure>
&lt;h2 id="the-road-ahead">The road ahead&lt;/h2>
&lt;p>&lt;strong>Frederico: That&amp;rsquo;s quite telling, thank you. In terms of the near future, what are the current
priorities for SIG etcd?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: Reliability is always top of mind &amp;ndash; we need to make sure etcd is rock-solid. We&amp;rsquo;re also
working on making etcd easier to use and manage for operators. And we have our sights set on making
etcd a viable standalone solution for infrastructure management, not just for Kubernetes. Oh, and of
course, scaling &amp;ndash; we need to ensure etcd can handle the growing demands of the cloud-native world.&lt;/p>
&lt;p>&lt;strong>Benjamin&lt;/strong>: I agree that reliability should always be our top guiding principle. We need to ensure
not only correctness but also compatibility. Additionally, we should continuously strive to improve
the understandability and maintainability of etcd. Our focus should be on addressing the pain points
that the community cares about the most.&lt;/p>
&lt;p>&lt;strong>Frederico: Are there any specific SIGs that you work closely with?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: SIG API Machinery, for sure – they own the structure of the data etcd stores, so we&amp;rsquo;re
constantly working together. And SIG Cluster Lifecycle – etcd is a key part of Kubernetes clusters,
so we collaborate on the newly created etcd Operator Working Group.&lt;/p>
&lt;p>&lt;strong>Wenjia&lt;/strong>: Other than SIG API Machinery and SIG Cluster Lifecycle that Marek mentioned above, SIG
Scalability and SIG Testing are other groups that we work closely with.&lt;/p>
&lt;p>&lt;strong>Frederico: In a more general sense, how would you list the key challenges for SIG etcd in the
evolving cloud native landscape?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: Well, reliability is always a challenge when you&amp;rsquo;re dealing with critical data. The
cloud-native world is evolving so fast that scaling to meet those demands is a constant effort.&lt;/p>
&lt;h2 id="getting-involved">Getting involved&lt;/h2>
&lt;p>&lt;strong>Frederico: We&amp;rsquo;re almost at the end of our conversation, but for those interested in etcd, how
can they get involved?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: We&amp;rsquo;d love to have them! The best way to start is to join our
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-etcd/README.md#meetings">SIG etcd meetings&lt;/a>,
follow discussions on the &lt;a href="https://groups.google.com/g/etcd-dev">etcd-dev mailing list&lt;/a>, and check
out our &lt;a href="https://github.com/etcd-io/etcd/issues">GitHub issues&lt;/a>. We&amp;rsquo;re always looking for people to
review proposals, test code, and contribute to documentation.&lt;/p>
&lt;p>&lt;strong>Wenjia&lt;/strong>: I love this question 😀 . There are numerous ways for people interested in contributing
to SIG etcd to get involved and make a difference. Here are some key areas where you can help:&lt;/p>
&lt;p>&lt;strong>Code Contributions&lt;/strong>:&lt;/p>
&lt;ul>
&lt;li>&lt;em>Bug Fixes&lt;/em>: Tackle existing issues in the etcd codebase. Start with issues labeled &amp;ldquo;good first
issue&amp;rdquo; or &amp;ldquo;help wanted&amp;rdquo; to find tasks that are suitable for newcomers.&lt;/li>
&lt;li>&lt;em>Feature Development&lt;/em>: Contribute to the development of new features and enhancements. Check the
etcd roadmap and discussions to see what&amp;rsquo;s being planned and where your skills might fit in.&lt;/li>
&lt;li>&lt;em>Testing and Code Reviews&lt;/em>: Help ensure the quality of etcd by writing tests, reviewing code
changes, and providing feedback.&lt;/li>
&lt;li>&lt;em>Documentation&lt;/em>: Improve &lt;a href="https://etcd.io/docs/">etcd&amp;rsquo;s documentation&lt;/a> by adding new content,
clarifying existing information, or fixing errors. Clear and comprehensive documentation is
essential for users and contributors.&lt;/li>
&lt;li>&lt;em>Community Support&lt;/em>: Answer questions on forums, mailing lists, or &lt;a href="https://kubernetes.slack.com/archives/C3HD8ARJ5">Slack channels&lt;/a>.
Helping others understand and use etcd is a valuable contribution.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Getting Started&lt;/strong>:&lt;/p>
&lt;ul>
&lt;li>&lt;em>Join the community&lt;/em>: Start by joining the etcd community on Slack,
attending SIG meetings, and following the mailing lists. This will
help you get familiar with the project, its processes, and the
people involved.&lt;/li>
&lt;li>&lt;em>Find a mentor&lt;/em>: If you&amp;rsquo;re new to open source or etcd, consider
finding a mentor who can guide you and provide support. Stay tuned!
The first cohort of our mentorship program was very successful, and a
new round is coming up.&lt;/li>
&lt;li>&lt;em>Start small&lt;/em>: Don&amp;rsquo;t be afraid to start with small contributions. Even
fixing a typo in the documentation or submitting a simple bug fix
can be a great way to get involved.&lt;/li>
&lt;/ul>
&lt;p>By contributing to etcd, you&amp;rsquo;ll not only be helping to improve a
critical piece of the cloud-native ecosystem but also gaining valuable
experience and skills. So, jump in and start contributing!&lt;/p>
&lt;p>&lt;strong>Frederico: Excellent, thank you. Lastly, one piece of advice that
you&amp;rsquo;d like to give to other newly formed SIGs?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Marek&lt;/strong>: Absolutely! My advice would be to embrace the established
processes of the larger community, prioritize collaboration with other
SIGs, and focus on building a strong community.&lt;/p>
&lt;p>&lt;strong>Wenjia&lt;/strong>: Here are some tips I myself found very helpful in my OSS
journey:&lt;/p>
&lt;ul>
&lt;li>&lt;em>Be patient&lt;/em>: Open source development can take time. Don&amp;rsquo;t get
discouraged if your contributions aren&amp;rsquo;t accepted immediately or if
you encounter challenges.&lt;/li>
&lt;li>&lt;em>Be respectful&lt;/em>: The etcd community values collaboration and
respect. Be mindful of others&amp;rsquo; opinions and work together to achieve
common goals.&lt;/li>
&lt;li>&lt;em>Have fun&lt;/em>: Contributing to open source should be
enjoyable. Find areas that interest you and contribute in ways that
you find fulfilling.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Frederico: A great way to end this spotlight, thank you all!&lt;/strong>&lt;/p>
&lt;hr>
&lt;p>For more information and resources, please take a look at:&lt;/p>
&lt;ol>
&lt;li>etcd website: &lt;a href="https://etcd.io/">https://etcd.io/&lt;/a>&lt;/li>
&lt;li>etcd GitHub repository: &lt;a href="https://github.com/etcd-io/etcd">https://github.com/etcd-io/etcd&lt;/a>&lt;/li>
&lt;li>etcd community: &lt;a href="https://etcd.io/community/">https://etcd.io/community/&lt;/a>&lt;/li>
&lt;/ol></description></item><item><title>Blog: Spotlight on SIG Architecture: Enhancements</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/01/21/sig-architecture-enhancements/</link><pubDate>Tue, 21 Jan 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2025/01/21/sig-architecture-enhancements/</guid><description>
&lt;p>&lt;em>This is the fourth interview of a SIG Architecture Spotlight series that will cover the different
subprojects, and we will be covering &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#enhancements">SIG Architecture:
Enhancements&lt;/a>.&lt;/em>&lt;/p>
&lt;p>In this SIG Architecture spotlight we talked with &lt;a href="https://github.com/kikisdeliveryservice">Kirsten
Garrison&lt;/a>, lead of the Enhancements subproject.&lt;/p>
&lt;h2 id="the-enhancements-subproject">The Enhancements subproject&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM): Hi Kirsten, very happy to have the opportunity to talk about the Enhancements
subproject. Let&amp;rsquo;s start with some quick information about yourself and your role.&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Kirsten Garrison (KG)&lt;/strong>: I’m a lead of the Enhancements subproject of SIG-Architecture and
currently work at Google. I first got involved by contributing to the service-catalog project with
the help of &lt;a href="https://github.com/carolynvs">Carolyn Van Slyck&lt;/a>. With time, &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md">I joined the Release
team&lt;/a>,
eventually becoming the Enhancements Lead and a Release Lead shadow. While on the release team, I
worked on some ideas to make the process better for the SIGs and Enhancements team (the opt-in
process) based on my team’s experiences. Eventually, I started attending Subproject meetings and
contributing to the Subproject’s work.&lt;/p>
&lt;p>&lt;strong>FSM: You mentioned the Enhancements subproject: how would you describe its main goals and areas of
intervention?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: The &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#enhancements">Enhancements
Subproject&lt;/a>
primarily concerns itself with the &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/0000-kep-process/README.md">Kubernetes Enhancement
Proposal&lt;/a>
(&lt;em>KEP&lt;/em> for short)—the &amp;ldquo;design&amp;rdquo; documents required for all features and significant changes
to the Kubernetes project.&lt;/p>
&lt;h2 id="the-kep-and-its-impact">The KEP and its impact&lt;/h2>
&lt;p>&lt;strong>FSM: The improvement of the KEP process was (and is) one in which SIG Architecture was heavily
involved. Could you explain the process to those that aren’t aware of it?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: &lt;a href="https://kubernetes.io/releases/release/#the-release-cycle">Every release&lt;/a>, the SIGs let the
Release Team know which features they intend to work on to be put into the release. As mentioned
above, the prerequisite for these changes is a KEP - a standardized design document that all authors
must fill out and approve in the first weeks of the release cycle. Most features &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages">will move
through 3
phases&lt;/a>:
alpha, beta, and finally GA, so approving a feature represents a significant commitment for the SIG.&lt;/p>
&lt;p>The KEP serves as the full source of truth of a feature. The &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/NNNN-kep-template/README.md">KEP
template&lt;/a>
has different requirements based on what stage a feature is in, but it generally requires a detailed
discussion of the design and the impact as well as providing artifacts of stability and
performance. The KEP takes quite a bit of iterative work between authors, SIG reviewers, the API review
team, and the Production Readiness Review team&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup> before it is approved. Each set of reviewers is
looking to make sure that the proposal meets their standards in order to have a stable and
performant Kubernetes release. Only after all approvals are secured, can an author go forth and
merge their feature in the Kubernetes code base.&lt;/p>
&lt;p>&lt;strong>FSM: I see, quite a bit of additional structure was added. Looking back, what were the most
significant improvements of that approach?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: In general, I think that the improvements with the most impact had to do with focusing on
the core intent of the KEP. KEPs exist not just to memorialize designs, but provide a structured way
to discuss and come to an agreement about different facets of the change. At the core of the KEP
process is communication and consideration.&lt;/p>
&lt;p>To that end, some of the significant changes revolve around a more detailed and accessible KEP
template. A significant amount of work was put in over time to get the
&lt;a href="https://github.com/kubernetes/enhancements">k/enhancements&lt;/a> repo into its current form &amp;ndash; a
directory structure organized by SIG with the contours of the modern KEP template (with
Proposal/Motivation/Design Details subsections). We might take that basic structure for granted
today, but it really represents the work of many people trying to get the foundation of this process
in place over time.&lt;/p>
&lt;p>As Kubernetes matures, we’ve needed to think about more than just the end goal of getting a single
feature merged. We need to think about things like: stability, performance, setting and meeting user
expectations. And as we’ve thought about those things the template has grown more detailed. The
addition of the Production Readiness Review was major as well as the enhanced testing requirements
(varying at different stages of a KEP’s lifecycle).&lt;/p>
&lt;h2 id="current-areas-of-focus">Current areas of focus&lt;/h2>
&lt;p>&lt;strong>FSM: Speaking of maturing, we’ve &lt;a href="https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/">recently released Kubernetes
v1.31&lt;/a>, and work on v1.32 &lt;a href="https://github.com/kubernetes/sig-release/tree/master/releases/release-1.32">has
started&lt;/a>. Are there
any areas that the Enhancements sub-project is currently addressing that might change the way things
are done?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: We’re currently working on two things:&lt;/p>
&lt;ol>
&lt;li>&lt;em>Creating a Process KEP template.&lt;/em> Sometimes people want to harness the KEP process for
significant changes that are more process oriented rather than feature oriented. We want to
support this because memorializing changes is important and giving people a better tool to do so
will only encourage more discussion and transparency.&lt;/li>
&lt;li>&lt;em>KEP versioning.&lt;/em> While our template changes aim to be as non-disruptive as possible, we
believe that it will be easier to track and communicate those changes to the community with
a versioned KEP template and the policies that go alongside such versioning.&lt;/li>
&lt;/ol>
&lt;p>Both features will take some time to get right and fully roll out (just like a KEP feature) but we
believe that they will both provide improvements that will benefit the community at large.&lt;/p>
&lt;p>&lt;strong>FSM: You mentioned improvements: I remember when project boards for Enhancement tracking were
introduced in recent releases, to great effect and unanimous applause from release team members. Was
this a particular area of focus for the subproject?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: The Subproject provided support to the Release Team’s Enhancement team in the migration away
from using the spreadsheet to a project board. The collection and tracking of enhancements has
always been a logistical challenge. During my time on the Release Team, I helped with the transition
to an opt-in system of enhancements, whereby the SIG leads &amp;ldquo;opt-in&amp;rdquo; KEPs for release tracking. This
helped to enhance communication between authors and SIGs before any significant work was undertaken
on a KEP and removed toil from the Enhancements team. This change used the existing tools to avoid
introducing too many changes at once to the community. Later, the Release Team approached the
Subproject with an idea of leveraging GitHub Project Boards to further improve the collection
process. This was to be a move away from the use of complicated spreadsheets to using repo-native
labels on &lt;a href="https://github.com/kubernetes/enhancements">k/enhancements&lt;/a> issues and project boards.&lt;/p>
&lt;p>&lt;strong>FSM: That surely helped simplify the workflow&amp;hellip;&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: Removing sources of friction and promoting clear communication is very important to the
Enhancements Subproject. At the same time, it’s important to give careful consideration to
decisions that impact the community as a whole. We want to make sure that changes are balanced to
give an upside while not causing any regressions or pain in the rollout. We supported the
Release Team in ideation as well as through the actual migration to the project boards. It was a
great success and exciting to see the team make high impact changes that helped everyone involved in
the KEP process!&lt;/p>
&lt;h2 id="getting-involved">Getting involved&lt;/h2>
&lt;p>&lt;strong>FSM: For those reading that might be curious and interested in helping, how would you describe the
required skills for participating in the sub-project?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: Familiarity with KEPs either via experience or taking time to look through the
kubernetes/enhancements repo is helpful. All are welcome to participate if interested - we can take
it from there.&lt;/p>
&lt;p>&lt;strong>FSM: Excellent! Many thanks for your time and insight &amp;ndash; any final comments you would like to
share with our readers?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KG&lt;/strong>: The Enhancements process is one of the most important parts of Kubernetes and requires
enormous amounts of coordination and collaboration of people and teams across the project to make it
successful. I’m thankful and inspired by everyone’s continued hard work and dedication to making the
project great. This is truly a wonderful community.&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>For more information, check the &lt;a href="https://kubernetes.io/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/">Production Readiness Review spotlight
interview&lt;/a>
in this series.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description></item><item><title>Blog: Spotlight on Kubernetes Upstream Training in Japan</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</link><pubDate>Mon, 28 Oct 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/10/28/k8s-upstream-training-japan-spotlight/</guid><description>
&lt;p>We are organizers of &lt;a href="https://github.com/kubernetes-sigs/contributor-playground/tree/master/japan">Kubernetes Upstream Training in Japan&lt;/a>.
Our team is composed of members who actively contribute to Kubernetes, including individuals who hold roles such as member, reviewer, approver, and chair.&lt;/p>
&lt;p>Our goal is to increase the number of Kubernetes contributors and foster the growth of the community.
While the Kubernetes community is friendly and collaborative, newcomers may find the first step of contributing to be a bit challenging.
Our training program aims to lower that barrier and create an environment where even beginners can participate smoothly.&lt;/p>
&lt;h2 id="what-is-kubernetes-upstream-training-in-japan">What is Kubernetes upstream training in Japan?&lt;/h2>
&lt;p>&lt;img alt="Upstream Training in 2022" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/k8s-upstream-training-japan-spotlight/ood-2022-01.png">&lt;/p>
&lt;p>Our training started in 2019 and is held once or twice a year.
Initially, Kubernetes Upstream Training was conducted as a co-located event of KubeCon (Kubernetes Contributor Summit),
but we launched Kubernetes Upstream Training in Japan with the aim of increasing Japanese contributors by hosting a similar event in Japan.&lt;/p>
&lt;p>Before the pandemic, the training was held in person, but since 2020, it has been conducted online.
The training offers the following content for those who have not yet contributed to Kubernetes:&lt;/p>
&lt;ul>
&lt;li>Introduction to Kubernetes community&lt;/li>
&lt;li>Overview of the Kubernetes codebase and how to create your first PR&lt;/li>
&lt;li>Tips and encouragement to lower participation barriers, such as language&lt;/li>
&lt;li>How to set up the development environment&lt;/li>
&lt;li>Hands-on session using &lt;a href="https://github.com/kubernetes-sigs/contributor-playground">kubernetes-sigs/contributor-playground&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>At the beginning of the program, we explain why contributing to Kubernetes is important and who can contribute.
We emphasize that contributing to Kubernetes allows you to make a global impact and that the Kubernetes community is looking forward to your contributions!&lt;/p>
&lt;p>We also explain the Kubernetes community, SIGs, and Working Groups.
Next, we explain the roles and responsibilities of Member, Reviewer, Approver, Tech Lead, and Chair.
Additionally, we introduce the communication tools we primarily use, such as Slack, GitHub, and mailing lists.
Some Japanese speakers may feel that communicating in English is a barrier.
Additionally, those who are new to the community need to understand where and how communication takes place.
We emphasize taking that first step, which is the aspect we focus on most in our training!&lt;/p>
&lt;p>We then go over the structure of the Kubernetes codebase, the main repositories, how to create a PR, and the CI/CD process using &lt;a href="https://docs.prow.k8s.io/">Prow&lt;/a>.
We explain in detail the process from creating a PR to getting it merged.&lt;/p>
&lt;p>After several lectures, participants get to experience hands-on work using &lt;a href="https://github.com/kubernetes-sigs/contributor-playground">kubernetes-sigs/contributor-playground&lt;/a>, where they can create a simple PR.
The goal is for participants to get a feel for the process of contributing to Kubernetes.&lt;/p>
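&lt;p>As an illustration, the local git steps practiced in a session like this might look as follows. This is a minimal sketch only: the branch name, file name, and commit message are hypothetical, and the real exercise pushes a branch to a fork of the kubernetes-sigs/contributor-playground repository and opens a pull request on GitHub.&lt;/p>

```shell
# Minimal sketch of the local git workflow from the hands-on session.
# Names and contents below are illustrative, not the actual assignment.
set -e
mkdir -p contributor-playground && cd contributor-playground
git init -q
git config user.name "Example Contributor"        # local, illustrative identity
git config user.email "contributor@example.com"
git checkout -q -b add-my-first-file              # work on a topic branch
echo "Hello, Kubernetes community!" > hello.md
git add hello.md
git commit -q -m "Add my first contribution"
git log --oneline -1                              # shows the new commit
```

&lt;p>From here, the session continues by pushing the branch to a fork and opening a pull request, which Prow then picks up for CI checks and the review workflow.&lt;/p>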
&lt;p>At the end of the program, we also provide a detailed explanation of setting up the development environment for contributing to the &lt;code>kubernetes/kubernetes&lt;/code> repository,
including building code locally, running tests efficiently, and setting up clusters.&lt;/p>
&lt;h2 id="interview-with-participants">Interview with participants&lt;/h2>
&lt;p>We conducted interviews with those who participated in our training program.
We asked them about their reasons for joining, their impressions, and their future goals.&lt;/p>
&lt;h3 id="keita-mochizukihttpsgithubcommochizuki875-ntt-data-group-corporationhttpswwwnttdatacomglobalenabout-usprofile">&lt;a href="https://github.com/mochizuki875">Keita Mochizuki&lt;/a> (&lt;a href="https://www.nttdata.com/global/en/about-us/profile">NTT DATA Group Corporation&lt;/a>)&lt;/h3>
&lt;p>Keita Mochizuki is a contributor who consistently contributes to Kubernetes and related projects.
Keita is also a professional in container security and has recently published a book.
Additionally, he has made available a &lt;a href="https://github.com/mochizuki875/KubernetesFirstContributionRoadMap">Roadmap for New Contributors&lt;/a>, which is highly beneficial for those new to contributing.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Why did you decide to participate in Kubernetes Upstream Training?&lt;/p>
&lt;p>&lt;strong>Keita:&lt;/strong> Actually, I participated twice, in 2020 and 2022.
In 2020, I had just started learning about Kubernetes and wanted to try getting involved in activities outside of work, so I signed up after seeing the event on Twitter by chance.
However, I didn&amp;rsquo;t have much knowledge at the time, and contributing to OSS felt like something beyond my reach.
As a result, my understanding after the training was shallow, and I left with more of a &amp;ldquo;hmm, okay&amp;rdquo; feeling.&lt;/p>
&lt;p>In 2022, I participated again when I was at a stage where I was seriously considering starting contributions.
This time, I did prior research and was able to resolve my questions during the lectures, making it a very productive experience.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> How did you feel after participating?&lt;/p>
&lt;p>&lt;strong>Keita:&lt;/strong> I felt that the significance of this training greatly depends on the participant&amp;rsquo;s mindset.
The training itself consists of general explanations and simple hands-on exercises, but it doesn&amp;rsquo;t mean that attending the training will immediately lead to contributions.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What is your purpose for contributing?&lt;/p>
&lt;p>&lt;strong>Keita:&lt;/strong> My initial motivation was to &amp;ldquo;gain a deep understanding of Kubernetes and build a track record,&amp;rdquo; meaning &amp;ldquo;contributing itself was the goal.&amp;rdquo;
Nowadays, I also contribute to address bugs or constraints I discover during my work.
Additionally, through contributing, I&amp;rsquo;ve become less hesitant to analyze undocumented features directly from the source code.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What has been challenging about contributing?&lt;/p>
&lt;p>&lt;strong>Keita:&lt;/strong> The most difficult part was taking the first step. Contributing to OSS requires a certain level of knowledge, and leveraging resources like this training and support from others was essential.
One phrase that stuck with me was, &amp;ldquo;Once you take the first step, it becomes easier to move forward.&amp;rdquo;
Also, in terms of continuing contributions as part of my job, the most challenging aspect is presenting the outcomes as achievements.
To keep contributing over time, it&amp;rsquo;s important to align it with business goals and strategies, but upstream contributions don&amp;rsquo;t always lead to immediate results that can be directly tied to performance.
Therefore, it&amp;rsquo;s crucial to ensure mutual understanding with managers and gain their support.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What are your future goals?&lt;/p>
&lt;p>&lt;strong>Keita:&lt;/strong> My goal is to contribute to areas with a larger impact.
So far, I&amp;rsquo;ve mainly contributed by fixing smaller bugs as my primary focus was building a track record,
but moving forward, I&amp;rsquo;d like to challenge myself with contributions that have a greater impact on Kubernetes users or that address issues related to my work.
Recently, I&amp;rsquo;ve also been working on reflecting the changes I&amp;rsquo;ve made to the codebase into the official documentation,
and I see this as a step toward achieving my goals.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Thank you very much!&lt;/p>
&lt;h3 id="yoshiki-fujikanehttpsgithubcomffjlabo-cyberagent-inchttpswwwcyberagentcojpen">&lt;a href="https://github.com/ffjlabo">Yoshiki Fujikane&lt;/a> (&lt;a href="https://www.cyberagent.co.jp/en/">CyberAgent, Inc.&lt;/a>)&lt;/h3>
&lt;p>Yoshiki Fujikane is one of the maintainers of &lt;a href="https://pipecd.dev/">PipeCD&lt;/a>, a CNCF Sandbox project.
In addition to developing new features for Kubernetes support in PipeCD,
Yoshiki actively participates in community management and speaks at various technical conferences.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Why did you decide to participate in the Kubernetes Upstream Training?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> At the time I participated, I was still a student.
I had only briefly worked with EKS, but I thought Kubernetes seemed complex yet cool, and I was casually interested in it.
Back then, OSS felt like something out of reach, and upstream development for Kubernetes seemed incredibly daunting.
While I had always been interested in OSS, I didn&amp;rsquo;t know where to start.
It was during this time that I learned about the Kubernetes Upstream Training and decided to take the challenge of contributing to Kubernetes.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What were your impressions after participating?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> I found it extremely valuable as a way to understand what it&amp;rsquo;s like to be part of an OSS community.
At the time, my English skills weren&amp;rsquo;t very strong, so accessing primary sources of information felt like a big hurdle for me.
Kubernetes is a very large project, and I didn&amp;rsquo;t have a clear understanding of the overall structure, let alone what was necessary for contributing.
The upstream training provided a Japanese explanation of the community structure and allowed me to gain hands-on experience with actual contributions.
Thanks to the guidance I received, I was able to learn how to approach primary sources and use them as entry points for further investigation, which was incredibly helpful.
This experience made me realize the importance of organizing and reviewing primary sources, and now I often dive into GitHub issues and documentation when something piques my interest.
As a result, while I am no longer contributing to Kubernetes itself, the experience has been a great foundation for contributing to other projects.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What areas are you currently contributing to, and what are the other projects you&amp;rsquo;re involved in?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> Right now, I&amp;rsquo;m no longer working with Kubernetes, but instead, I&amp;rsquo;m a maintainer of PipeCD, a CNCF Sandbox project.
PipeCD is a CD tool that supports GitOps-style deployments for various application platforms.
The tool originally started as an internal project at CyberAgent.
With different teams adopting different platforms, PipeCD was developed to provide a unified CD platform with a consistent user experience.
Currently, it supports Kubernetes, AWS ECS, Lambda, Cloud Run, and Terraform.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What role do you play within the PipeCD team?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> I work full-time on improving and developing Kubernetes-related features within the team.
Since we provide PipeCD as a SaaS internally, my main focus is on adding new features and improving existing ones as part of that support.
In addition to code contributions, I also contribute by giving talks at various events and managing community meetings to help grow the PipeCD community.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Could you explain what kind of improvements or developments you are working on with regards to Kubernetes?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> PipeCD supports GitOps and Progressive Delivery for Kubernetes, so I&amp;rsquo;m involved in the development of those features.
Recently, I&amp;rsquo;ve been working on features that streamline deployments across multiple clusters.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Have you encountered any challenges while contributing to OSS?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> One challenge is developing features that maintain generality while meeting user use cases.
When we receive feature requests while operating the internal SaaS, we first consider adding features to solve those issues.
At the same time, we want PipeCD to be used by a broader audience as an OSS tool.
So, I always think about whether a feature designed for one use case could be applied to another, ensuring the software remains flexible and widely usable.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> What are your goals moving forward?&lt;/p>
&lt;p>&lt;strong>Yoshiki:&lt;/strong> I want to focus on expanding PipeCD&amp;rsquo;s functionality.
Currently, we are developing PipeCD under the slogan &amp;ldquo;One CD for All.&amp;rdquo;
As I mentioned earlier, it supports Kubernetes, AWS ECS, Lambda, Cloud Run, and Terraform, but there are many other platforms out there, and new platforms may emerge in the future.
For this reason, we are currently developing a plugin system that will allow users to extend PipeCD on their own, and I want to push this effort forward.
I&amp;rsquo;m also working on features for multi-cluster deployments in Kubernetes, and I aim to continue making impactful contributions.&lt;/p>
&lt;p>&lt;strong>Junya:&lt;/strong> Thank you very much!&lt;/p>
&lt;h2 id="future-of-kubernetes-upstream-training">Future of Kubernetes upstream training&lt;/h2>
&lt;p>We plan to continue hosting Kubernetes Upstream Training in Japan and look forward to welcoming many new contributors.
Our next session is scheduled to take place at the end of November during &lt;a href="https://event.cloudnativedays.jp/cndw2024">CloudNative Days Winter 2024&lt;/a>.&lt;/p>
&lt;p>Moreover, our goal is to expand these training programs not only in Japan but also around the world.
&lt;a href="https://kubernetes.io/blog/2024/06/06/10-years-of-kubernetes/">Kubernetes celebrated its 10th anniversary&lt;/a> this year, and for the community to become even more active, it&amp;rsquo;s crucial for people across the globe to continue contributing.
While Upstream Training is already held in several regions, we aim to bring it to even more places.&lt;/p>
&lt;p>We hope that as more people join the Kubernetes community and contribute, our community will become even more vibrant!&lt;/p>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/tree/master/elections/steering/2024">2024 Steering Committee Election&lt;/a> is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p>
&lt;p>This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md">charter&lt;/a>.&lt;/p>
&lt;p>Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.&lt;/p>
&lt;h2 id="results">Results&lt;/h2>
&lt;p>Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Antonio Ojea (&lt;a href="https://github.com/aojea">@aojea&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Benjamin Elder (&lt;a href="https://github.com/bentheelder">@BenTheElder&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Sascha Grunert (&lt;a href="https://github.com/saschagrunert">@saschagrunert&lt;/a>), Red Hat&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>They join continuing members:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Stephen Augustus (&lt;a href="https://github.com/justaugustus">@justaugustus&lt;/a>), Cisco&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Paco Xu 徐俊杰 (&lt;a href="https://github.com/pacoxu">@pacoxu&lt;/a>), DaoCloud&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Patrick Ohly (&lt;a href="https://github.com/pohly">@pohly&lt;/a>), Intel&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Maciej Szulik (&lt;a href="https://github.com/soltysh">@soltysh&lt;/a>), Defense Unicorns&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Benjamin Elder is a returning Steering Committee Member.&lt;/p>
&lt;h2 id="big-thanks">Big thanks!&lt;/h2>
&lt;p>Thank you and congratulations on a successful election to this round’s election officers:&lt;/p>
&lt;ul>
&lt;li>Bridget Kromhout (&lt;a href="https://github.com/bridgetkromhout">@bridgetkromhout&lt;/a>)&lt;/li>
&lt;li>Christoph Blecker (&lt;a href="https://github.com/cblecker">@cblecker&lt;/a>)&lt;/li>
&lt;li>Priyanka Saggu (&lt;a href="https://github.com/Priyankasaggu11929">@Priyankasaggu11929&lt;/a>)&lt;/li>
&lt;/ul>
&lt;p>Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:&lt;/p>
&lt;ul>
&lt;li>Bob Killen (&lt;a href="https://github.com/mrbobbytables">@mrbobbytables&lt;/a>)&lt;/li>
&lt;li>Nabarun Pal (&lt;a href="https://github.com/palnabarun">@palnabarun&lt;/a>)&lt;/li>
&lt;/ul>
&lt;p>And thank you to all the candidates who came forward to run for election.&lt;/p>
&lt;h2 id="get-involved-with-the-steering-committee">Get involved with the Steering Committee&lt;/h2>
&lt;p>This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee &lt;a href="https://bit.ly/k8s-steering-wd">meeting notes&lt;/a> and weigh in by filing an issue or creating a PR against their &lt;a href="https://github.com/kubernetes/steering">repo&lt;/a>. They hold an &lt;a href="https://github.com/kubernetes/steering">open meeting on the first Monday of every month at 8am PT&lt;/a>. They can also be contacted at their public mailing list &lt;a href="mailto:steering@kubernetes.io">steering@kubernetes.io&lt;/a>.&lt;/p>
&lt;p>You can see what the Steering Committee meetings are all about by watching past meetings on the &lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM">YouTube Playlist&lt;/a>.&lt;/p>
&lt;p>If you want to meet some of the newly elected Steering Committee members, join us for the &lt;a href="https://www.kubernetes.dev/events/2024/kcsna/schedule/#steering-ama">Steering AMA&lt;/a> at the Kubernetes Contributor Summit North America 2024 in Salt Lake City.&lt;/p>
&lt;hr>
&lt;p>&lt;em>This post was adapted from one written by the &lt;a href="https://github.com/kubernetes/community/tree/master/communication/contributor-comms">Contributor Comms Subproject&lt;/a>. If you want to write stories about the Kubernetes community, learn more about us.&lt;/em>&lt;/p></description></item><item><title>Blog: Spotlight on CNCF Deaf and Hard-of-hearing Working Group (DHHWG)</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/09/30/cncf-deaf-and-hard-of-hearing-working-group-spotlight/</link><pubDate>Mon, 30 Sep 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/09/30/cncf-deaf-and-hard-of-hearing-working-group-spotlight/</guid><description>
&lt;p>&lt;em>In recognition of Deaf Awareness Month and the importance of inclusivity in the tech community, we are spotlighting &lt;a href="https://www.linkedin.com/in/catherinepaganini/">Catherine Paganini&lt;/a>, facilitator and one of the founding members of &lt;a href="https://contribute.cncf.io/about/deaf-and-hard-of-hearing/">CNCF Deaf and Hard-of-Hearing Working Group&lt;/a> (DHHWG). In this interview, &lt;a href="https://www.linkedin.com/in/sandeepkanabar/">Sandeep Kanabar&lt;/a>, a deaf member of the DHHWG and part of the Kubernetes &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md#contributor-comms">SIG ContribEx Communications team&lt;/a>, sits down with Catherine to explore the impact of the DHHWG on cloud native projects like Kubernetes.&lt;/em>&lt;/p>
&lt;p>&lt;em>Sandeep’s journey is a testament to the power of inclusion. Through his involvement in the DHHWG, he connected with members of the Kubernetes community who encouraged him to join &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md">SIG ContribEx&lt;/a> - the group responsible for sustaining the Kubernetes contributor experience. In an ecosystem where open-source projects are actively seeking contributors and maintainers, this story highlights how important it is to create pathways for underrepresented groups, including those with disabilities, to contribute their unique perspectives and skills.&lt;/em>&lt;/p>
&lt;p>&lt;em>In this interview, we delve into Catherine’s journey, the challenges and triumphs of establishing the DHHWG, and the vision for a more inclusive future in cloud native. We invite Kubernetes contributors, maintainers, and community members to reflect on the &lt;strong>significance of empathy, advocacy, and community&lt;/strong> in fostering a truly inclusive environment for all, and to think about how they can support efforts to increase diversity and accessibility within their own projects.&lt;/em>&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>&lt;strong>Sandeep Kanabar (SK): Hello Catherine, could you please introduce yourself, share your professional background, and explain your connection to the Kubernetes ecosystem?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Catherine Paganini (CP)&lt;/strong>: I&amp;rsquo;m the Head of Marketing at &lt;a href="https://buoyant.io/">Buoyant&lt;/a>, the creator of &lt;a href="https://linkerd.io/">Linkerd&lt;/a>, the CNCF-graduated service mesh, and 5th CNCF project. Four years ago, I started contributing to open source. The initial motivation was to make cloud native concepts more accessible to newbies and non-technical people. Without a technical background, it was hard for me to understand what Kubernetes, containers, service meshes, etc. mean. All content was targeted at engineers already familiar with foundational concepts. Clearly, I couldn&amp;rsquo;t be the only one struggling with wrapping my head around cloud native.&lt;/p>
&lt;p>My first contribution was the &lt;a href="https://landscape.cncf.io/guide#introduction">CNCF Landscape Guide&lt;/a>, which I co-authored with my former colleague Jason Morgan. Next, we started the &lt;a href="https://glossary.cncf.io/">CNCF Glossary&lt;/a>, which explains cloud native concepts in simple terms. Today, the glossary has been (partially) localised into 14 languages!&lt;/p>
&lt;p>Currently, I&amp;rsquo;m the co-chair of the &lt;a href="https://contribute.cncf.io/about/">TAG Contributor Strategy&lt;/a> and the Facilitator of the Deaf and Hard of Hearing Working Group (DHHWG) and Blind and Visually Impaired WG (BVIWG), which is still in formation. I&amp;rsquo;m also working on a new Linux Foundation (LF) initiative called ABIDE (Accessibility and Belonging through Inclusion, Diversity, and Equity), so stay tuned to learn more about it!&lt;/p>
&lt;h2 id="motivation-and-early-milestones">Motivation and early milestones&lt;/h2>
&lt;p>&lt;strong>SK: That&amp;rsquo;s inspiring! Building on your passion for accessibility, what motivated you to facilitate the creation of the DHHWG? Was there a specific moment or experience that sparked this initiative?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: Last year at KubeCon Amsterdam, I learned about a great initiative by Jay Tihema that creates &lt;a href="https://contribute.cncf.io/resources/videos/2023/from-maori-to-deaf-engineers/">pathways for Maori youth into cloud native&lt;/a> and open source. While telling my CODA (children of deaf adults) high school friend about it, I thought it&amp;rsquo;d be great to create something similar for deaf folks. A few months later, I posted about it in a LinkedIn post that the CNCF shared. Deaf people started to reach out, wanting to participate. And the rest is history.&lt;/p>
&lt;p>&lt;strong>SK: Speaking of history, since its launch, how has the DHHWG evolved? Could you highlight some of the key milestones or achievements the group has reached recently?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: Our WG is about a year old. It started with a few deaf engineers and me brainstorming how to make KubeCon more accessible. We published an initial draft of &lt;a href="https://contribute.cncf.io/accessibility/deaf-and-hard-of-hearing/conference-best-practices/">Best practices for an inclusive conference&lt;/a> and shared it with the LF events team. KubeCon Chicago was two months later, and we had a couple of deaf attendees. It was the &lt;strong>first&lt;/strong> KubeCon accessible to deaf signers. &lt;a href="https://www.linkedin.com/in/destiny-o-connor-28b2a5255/">Destiny&lt;/a>, one of our co-chairs, even participated in a &lt;a href="https://youtu.be/3WJ_s4Jvbsk?si=iscthTiCyMxoMUqY&amp;t=347">keynote panel&lt;/a>. It was incredible how quickly everything happened!&lt;/p>
&lt;p>&lt;img alt="DHHWG members at KubeCon Chicago" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/cncf-dhhwg/cncf-dhhwg-chicago.jpg">
&lt;em>DHHWG members at KubeCon Chicago&lt;/em>&lt;/p>
&lt;p>The team has grown since then, and we&amp;rsquo;ve been able to do much more. With a kiosk in the project pavilion, an open space discussion, a sign language crash course, and a few media interviews, KubeCon Paris had a stronger advocacy and outreach focus. &lt;a href="https://www.youtube.com/watch?v=E8AcyqsgAyQ">Check out this video of our team in Paris&lt;/a> to get a glimpse of all the different KubeCon activities — it was such a great event! The team also launched the first CNCF Community Group in sign language, &lt;a href="https://community.cncf.io/deaf-in-cloud-native/">Deaf in Cloud Native&lt;/a>, a glossary team that creates sign language videos for each technical term to help standardize technical signs across the globe. It&amp;rsquo;s crazy to think that it all happened within one year!&lt;/p>
&lt;h2 id="overcoming-challenges-and-addressing-misconceptions">Overcoming challenges and addressing misconceptions&lt;/h2>
&lt;p>&lt;strong>SK: That&amp;rsquo;s remarkable progress in just a year! Building such momentum must have come with its challenges. What barriers have you encountered in facilitating the DHHWG, and how did you and the group work to overcome them?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: The support from the community, LF, and CNCF has been incredible. The fact that we achieved so much is proof of it. The challenges are more in helping some team members overcome their fear of contributing. Most are new to open source, and it can be intimidating to put your work out there for everyone to see. The fear of being criticized in public is real; however, as they will hopefully realize over time, our community is incredibly supportive. Instead of criticizing, people tend to help improve the work, leading to better outcomes.&lt;/p>
&lt;p>&lt;strong>SK: Are there any misconceptions about the deaf and hard-of-hearing community in tech that you&amp;rsquo;d like to address?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: Deaf and hard of hearing individuals are very diverse — there is no one-size-fits-all. Some deaf people are oral (speak), others sign, while some lip read or prefer captions. It generally depends on how people grew up. While some people come from deaf families and sign language is their native language, others were born into hearing families who may or may not have learned how to sign. Some deaf people grew up surrounded by hearing people, while others grew up deeply embedded in Deaf culture. Hard-of-hearing individuals, on the other hand, typically can communicate well with hearing peers one-on-one in quiet settings, but loud environments or conversations with multiple people can make it hard to follow the conversation. Most rely heavily on captions. Each background and experience will shape their communication style and preferences. In short, what works for one person, doesn&amp;rsquo;t necessarily work for others. So &lt;strong>never assume&lt;/strong> and &lt;strong>always ask&lt;/strong> about accessibility needs and preferences.&lt;/p>
&lt;h2 id="impact-and-the-role-of-allies">Impact and the role of allies&lt;/h2>
&lt;p>&lt;strong>SK: Can you share some key impacts/outcomes of the conference best practices document?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: Here are the two most important ones: Captions should be on the monitor, not in an app. That&amp;rsquo;s especially important during technical talks with live demos. Deaf and hard of hearing attendees will miss important information switching between captions on their phone and code on the screen.&lt;/p>
&lt;p>Interpreters are most valuable during networking, not in talks (with captions). Most people come to conferences for the hallway track. That is no different for deaf attendees. If they can&amp;rsquo;t network, they are missing out on key professional connections, affecting their career prospects.&lt;/p>
&lt;p>&lt;strong>SK: In your view, how crucial is the role of allies within the DHHWG, and what contributions have they made to the group’s success?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: Deaf and hard of hearing individuals are a minority and can only do so much. &lt;em>&lt;strong>Allies are the key to any diversity and inclusion initiative&lt;/strong>&lt;/em>. As a majority, allies can help spread the word and educate their peers, playing a key role in scaling advocacy efforts. They also have the power to demand change. It&amp;rsquo;s easy for companies to ignore minorities, but if the majority demands that their employers be accessible, environmentally conscious, and good citizens, they will ultimately be pushed to adapt to new societal values.&lt;/p>
&lt;h2 id="expanding-dei-efforts-and-future-vision">Expanding DEI efforts and future vision&lt;/h2>
&lt;p>&lt;strong>SK: The importance of allies in driving change is clear. Beyond the DHHWG, are you involved in any other DEI groups or initiatives within the tech community?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: As mentioned above, I&amp;rsquo;m working on an initiative called ABIDE, which is still a work in progress. I don&amp;rsquo;t want to share too much about it yet, but what I can say is that the DHHWG will be part of it and that we just started a Blind and Visually Impaired WG (BVIWG). ABIDE will start by focusing on accessibility, so if anyone reading this has an idea for another WG, please reach out to me via the CNCF Slack @Catherine Paganini.&lt;/p>
&lt;p>&lt;strong>SK: What does the future hold for the DHHWG? Can you share details about any ongoing or upcoming initiatives?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: I think we&amp;rsquo;ve been very successful in terms of visibility and awareness so far. We can&amp;rsquo;t stop, though. Awareness work is ongoing, and most people in our community haven&amp;rsquo;t heard about us or met anyone on our team yet, so a lot of work still lies ahead.&lt;/p>
&lt;p>&lt;img alt="DHHWG members at KubeCon Paris" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/cncf-dhhwg/cncf-dhhwg-paris.jpg">
&lt;em>DHHWG members at KubeCon Paris&lt;/em>&lt;/p>
&lt;p>The next step is to refocus on advocacy. The same thing we did with the conference best practices but for other areas. The goal is to help educate the community about what real accessibility looks like, how projects can be more accessible, and why employers should seriously consider deaf candidates while providing them with the tools they need to conduct successful interviews and employee onboarding. We need to capture all that in documents, publish it, and then get the word out. That last part is certainly the most challenging, but it&amp;rsquo;s also where everyone can get involved.&lt;/p>
&lt;h2 id="call-to-action">Call to action&lt;/h2>
&lt;p>&lt;strong>SK: Thank you for sharing your insights, Catherine. As we wrap up, do you have any final thoughts or a call to action for our readers?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>CP&lt;/strong>: As we build our &lt;a href="https://contribute.cncf.io/accessibility/deaf-and-hard-of-hearing/">accessibility page&lt;/a>, check in regularly to see what&amp;rsquo;s new. Share the docs with your team, employer, and network — anyone, really. The more people understand what accessibility really means and why it matters, the more people will recognize when something isn&amp;rsquo;t accessible, and be able to call out marketing-BS, which, unfortunately, is more often the case than not. We need allies to help push for change. &lt;strong>No minority can do this on their own&lt;/strong>. So please learn about accessibility, keep an eye out for it, and call it out when something isn&amp;rsquo;t accessible. We need your help!&lt;/p>
&lt;h2 id="wrapping-up">Wrapping up&lt;/h2>
&lt;p>Catherine and the DHHWG&amp;rsquo;s work exemplify the power of community and advocacy. As we celebrate Deaf Awareness Month, let&amp;rsquo;s reflect on her role as an ally and consider how we can all contribute to building a more inclusive tech community, particularly within open-source projects like Kubernetes.&lt;/p>
&lt;p>&lt;em>Together, we can break down barriers, challenge misconceptions, and ensure that everyone feels welcome and valued. By advocating for accessibility, supporting initiatives like the DHHWG, and fostering a culture of empathy, we can create a truly inclusive and welcoming space for all.&lt;/em>&lt;/p></description></item><item><title>Blog: Spotlight on SIG Scheduling</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/09/24/sig-scheduling-spotlight-2024/</link><pubDate>Tue, 24 Sep 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/09/24/sig-scheduling-spotlight-2024/</guid><description>
&lt;p>In this SIG Scheduling spotlight we talked with &lt;a href="https://github.com/sanposhiho/">Kensei Nakada&lt;/a>, an
approver in SIG Scheduling.&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Arvind:&lt;/strong> &lt;strong>Hello, thank you for the opportunity to learn more about SIG Scheduling! Would you
like to introduce yourself and tell us a bit about your role, and how you got involved with
Kubernetes?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Kensei&lt;/strong>: Hi, thanks for the opportunity! I’m Kensei Nakada
(&lt;a href="https://github.com/sanposhiho/">@sanposhiho&lt;/a>), a software engineer at
&lt;a href="https://tetrate.io/">Tetrate.io&lt;/a>. I have been contributing to Kubernetes in my free time for more
than 3 years, and now I’m an approver of SIG Scheduling in Kubernetes. Also, I’m a founder/owner of
two SIG subprojects,
&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-simulator">kube-scheduler-simulator&lt;/a> and
&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension">kube-scheduler-wasm-extension&lt;/a>.&lt;/p>
&lt;h2 id="about-sig-scheduling">About SIG Scheduling&lt;/h2>
&lt;p>&lt;strong>AP: That&amp;rsquo;s awesome! You&amp;rsquo;ve been involved with the project for a long time. Can you provide a
brief overview of SIG Scheduling and explain its role within the Kubernetes ecosystem?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: As the name implies, our responsibility is to enhance scheduling within
Kubernetes. Specifically, we develop the components that determine which Node is the best place for
each Pod. In Kubernetes, our main focus is on maintaining the
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/">kube-scheduler&lt;/a>, along
with other scheduling-related components as part of our SIG subprojects.&lt;/p>
&lt;p>&lt;strong>AP: I see, got it! That makes me curious&amp;ndash;what recent innovations or developments has SIG
Scheduling introduced to Kubernetes scheduling?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: From a feature perspective, there have been
&lt;a href="https://kubernetes.io/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/">several enhancements&lt;/a>
to &lt;code>PodTopologySpread&lt;/code> recently. &lt;code>PodTopologySpread&lt;/code> is a relatively new feature in the scheduler,
and we are still in the process of gathering feedback and making improvements.&lt;/p>
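&lt;p>For readers who haven&amp;rsquo;t used it, &lt;code>PodTopologySpread&lt;/code> is configured per Pod via &lt;code>topologySpreadConstraints&lt;/code>. A minimal illustrative manifest follows; the Pod name, label, and image are placeholders, but the constraint fields are the stable Kubernetes API:&lt;/p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-web            # placeholder name
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                 # max difference in matching Pods between any two zones
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: web               # which Pods count toward the spread
  containers:
  - name: web
    image: registry.k8s.io/pause:3.9
```

&lt;p>With this constraint, the scheduler keeps the number of &lt;code>app: web&lt;/code> Pods per zone within a skew of 1, refusing to schedule a Pod that would violate it.&lt;/p>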
&lt;p>Most recently, we have been focusing on a new internal enhancement called
&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4247-queueinghint/README.md">QueueingHint&lt;/a>
which aims to enhance scheduling throughput. Throughput is one of our crucial metrics in
scheduling. Traditionally, we have primarily focused on optimizing the latency of each scheduling
cycle. QueueingHint takes a different approach, optimizing when to retry scheduling, thereby
reducing the likelihood of wasting scheduling cycles.&lt;/p>
&lt;p>&lt;strong>AP: That sounds interesting! Are there any other interesting topics or projects you are currently
working on within SIG Scheduling?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: I’m leading the development of &lt;code>QueueingHint&lt;/code>, which I just shared. Given that it’s a big new
undertaking for us, we’ve been facing many unexpected challenges, especially around scalability,
and we’re working to solve each of them so we can eventually enable it by default.&lt;/p>
&lt;p>And also, I believe
&lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension">kube-scheduler-wasm-extension&lt;/a>
(a SIG subproject) that I started last year would be interesting to many people. Kubernetes has
various extensions from many components. Traditionally, extensions are provided via webhooks
(&lt;a href="https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/scheduler_extender.md">extender&lt;/a>
in the scheduler) or Go SDK
(&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/">Scheduling Framework&lt;/a>
in the scheduler). However, these come with drawbacks - performance issues with webhooks and the need to
rebuild and replace schedulers with Go SDK, posing difficulties for those seeking to extend the
scheduler but lacking familiarity with it. The project is trying to introduce a new solution to
this general challenge - a &lt;a href="https://webassembly.org/">WebAssembly&lt;/a> based extension. Wasm allows
users to build plugins easily, without worrying about recompiling or replacing their scheduler, and
sidestepping performance concerns.&lt;/p>
&lt;p>Through this project, SIG Scheduling has been learning valuable insights about WebAssembly&amp;rsquo;s
interaction with large Kubernetes objects. And I believe the experience that we’re gaining should be
useful broadly within the community, beyond SIG Scheduling.&lt;/p>
&lt;p>&lt;strong>AP: Definitely! Now, there are 8 subprojects inside SIG Scheduling. Would you like to
talk about them? Are there some interesting contributions by those teams you want to highlight?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Let me pick up three subprojects: Kueue, KWOK and descheduler.&lt;/p>
&lt;dl>
&lt;dt>&lt;a href="https://github.com/kubernetes-sigs/kueue">Kueue&lt;/a>&lt;/dt>
&lt;dd>Recently, many people have been trying to manage batch workloads with Kubernetes, and in 2022,
the Kubernetes community founded
&lt;a href="https://github.com/kubernetes/community/blob/master/wg-batch/README.md">WG-Batch&lt;/a> to better
support such batch workloads in Kubernetes. &lt;a href="https://github.com/kubernetes-sigs/kueue">Kueue&lt;/a>
is a project that plays a crucial role in this effort. It’s a job queueing controller that decides when a job
should wait, when a job should be admitted to start, and when a job should be preempted. Kueue aims
to be installed on a vanilla Kubernetes cluster while cooperating with existing mature controllers
(scheduler, cluster-autoscaler, kube-controller-manager, etc.).&lt;/dd>
&lt;dt>&lt;a href="https://github.com/kubernetes-sigs/kwok">KWOK&lt;/a>&lt;/dt>
&lt;dd>KWOK is a component with which you can create a cluster of thousands of Nodes in seconds. It’s
mostly useful for simulation and testing as a lightweight cluster, and in fact another SIG
subproject, &lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-simulator">kube-scheduler-simulator&lt;/a>,
uses KWOK under the hood.&lt;/dd>
&lt;dt>&lt;a href="https://github.com/kubernetes-sigs/descheduler">descheduler&lt;/a>&lt;/dt>
&lt;dd>Descheduler is a component that recreates pods running on undesired Nodes. In Kubernetes,
scheduling constraints (&lt;code>PodAffinity&lt;/code>, &lt;code>NodeAffinity&lt;/code>, &lt;code>PodTopologySpread&lt;/code>, etc.) are honored only at
Pod scheduling time, and there is no guarantee that the constraints remain satisfied afterwards.
Descheduler evicts Pods violating their scheduling constraints (or other undesired conditions) so
that they’re recreated and rescheduled.&lt;/dd>
&lt;dt>&lt;a href="https://github.com/kubernetes-sigs/descheduler/blob/master/keps/753-descheduling-framework/README.md">Descheduling Framework&lt;/a>&lt;/dt>
&lt;dd>One very interesting ongoing project, similar to the
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/">Scheduling Framework&lt;/a>
in the scheduler, aiming to make descheduling logic extensible and allow maintainers to focus on building
a core engine of descheduler.&lt;/dd>
&lt;/dl>
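&lt;p>To make the descheduler&amp;rsquo;s role concrete, here is a small policy sketch using the &lt;code>v1alpha1&lt;/code> strategies format; newer descheduler releases use a different profiles-based API, so treat the exact shape as an assumption and check the project README for the current version:&lt;/p>

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict Pods whose inter-pod anti-affinity is no longer satisfied
  "RemovePodsViolatingInterPodAntiAffinity":
    enabled: true
  # Evict Pods whose node affinity is no longer satisfied
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"
```

&lt;p>Evicted Pods are then recreated by their owning controller and rescheduled by kube-scheduler, which re-evaluates the constraints against the current cluster state.&lt;/p>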
&lt;p>&lt;strong>AP: Thank you for letting us know! And I have to ask, what are some of your favorite things about
this SIG?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: What I really like about this SIG is how actively engaged everyone is. We come from various
companies and industries, bringing diverse perspectives to the table. Instead of these differences
causing division, they actually generate a wealth of opinions. Each view is respected, and this
makes our discussions both rich and productive.&lt;/p>
&lt;p>I really appreciate this collaborative atmosphere, and I believe it has been key to continuously
improving our components over the years.&lt;/p>
&lt;h2 id="contributing-to-sig-scheduling">Contributing to SIG Scheduling&lt;/h2>
&lt;p>&lt;strong>AP: Kubernetes is a community-driven project. Any recommendations for new contributors or
beginners looking to get involved and contribute to SIG scheduling? Where should they start?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Let me start with a general recommendation for contributing to any SIG: a common approach is to look for
&lt;a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22">good-first-issue&lt;/a>.
However, you&amp;rsquo;ll soon realize that many people worldwide are trying to contribute to the Kubernetes
repository.&lt;/p>
&lt;p>I suggest starting by examining the implementation of a component that interests you. If you have
any questions about it, ask in the corresponding Slack channel (e.g., #sig-scheduling for the
scheduler, #sig-node for kubelet, etc). Once you have a rough understanding of the implementation,
look at issues within the SIG (e.g.,
&lt;a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fscheduling">sig-scheduling&lt;/a>),
where you&amp;rsquo;ll find more unassigned issues compared to good-first-issue ones. You may also want to
filter issues with the
&lt;a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue++label%3Akind%2Fcleanup+">kind/cleanup&lt;/a>
label, which often indicates lower-priority tasks and can be starting points.&lt;/p>
&lt;p>Specifically for SIG Scheduling, you should first understand the
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/">Scheduling Framework&lt;/a>, which is
the fundamental architecture of kube-scheduler. Most of the implementation is found in
&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler">pkg/scheduler&lt;/a>. I suggest starting with
&lt;a href="https://github.com/kubernetes/kubernetes/blob/0590bb1ac495ae8af2a573f879408e48800da2c5/pkg/scheduler/schedule_one.go#L66">ScheduleOne&lt;/a>
function and then exploring deeper from there.&lt;/p>
&lt;p>Additionally, apart from the main kubernetes/kubernetes repository, consider looking into
sub-projects. These typically have fewer maintainers and offer more opportunities to make a
significant impact. Despite being called &amp;ldquo;sub&amp;rdquo; projects, many have a large number of users and a
considerable impact on the community.&lt;/p>
&lt;p>And last but not least, remember that contributing to the community isn’t just about code. While I
have talked a lot about implementation contributions, there are many ways to contribute, and each one
is valuable. One comment on an issue, one piece of feedback on an existing feature, one review comment on a PR,
one clarification in the documentation; every small contribution helps drive the Kubernetes
ecosystem forward.&lt;/p>
&lt;p>&lt;strong>AP: Those are some pretty useful tips! And if I may ask, how do you assist new contributors in
getting started, and what skills are contributors likely to learn by participating in SIG Scheduling?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Our maintainers are available to answer your questions in the #sig-scheduling Slack
channel. By participating, you&amp;rsquo;ll gain a deeper understanding of Kubernetes scheduling and have the
opportunity to collaborate and network with maintainers from diverse backgrounds. You&amp;rsquo;ll learn not
just how to write code, but also how to maintain a large project, design and discuss new features,
address bugs, and much more.&lt;/p>
&lt;h2 id="future-directions">Future Directions&lt;/h2>
&lt;p>&lt;strong>AP: What are some Kubernetes-specific challenges in terms of scheduling? Are there any particular
pain points?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Scheduling in Kubernetes can be quite challenging because of the diverse needs of different
organizations with different business requirements. Supporting all possible use cases in
kube-scheduler is impossible. Therefore, extensibility is a key focus for us. A few years ago, we
rearchitected kube-scheduler with the
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/">Scheduling Framework&lt;/a>,
which offers flexible extensibility for users to implement various scheduling needs through plugins. This
allows maintainers to focus on the core scheduling features and the framework runtime.&lt;/p>
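&lt;p>The plugin idea can be sketched in a toy form (simplified types for illustration only, not the real
&lt;code>pkg/scheduler/framework&lt;/code> API): during the filter phase, every registered plugin gets a veto
over which nodes may run a pod.&lt;/p>

```go
package main

import "fmt"

// Toy sketch of the Scheduling Framework's extension-point idea: the scheduler
// runs every registered FilterPlugin against every candidate node, and a node
// is feasible only if no plugin rejects it. (Simplified types, not the real
// k8s.io/kubernetes/pkg/scheduler/framework interfaces.)
type Pod struct{ CPURequest int }

type Node struct {
	Name    string
	CPUFree int
}

type FilterPlugin interface {
	Name() string
	Filter(p Pod, n Node) bool
}

// A toy plugin: a node passes only if it has enough free CPU for the pod.
type NodeResourcesFit struct{}

func (NodeResourcesFit) Name() string { return "NodeResourcesFit" }

func (NodeResourcesFit) Filter(p Pod, n Node) bool { return n.CPUFree >= p.CPURequest }

// feasibleNodes mimics the scheduler's filter phase: keep only the nodes that
// every plugin accepts.
func feasibleNodes(p Pod, nodes []Node, plugins []FilterPlugin) []string {
	var out []string
nextNode:
	for _, n := range nodes {
		for _, pl := range plugins {
			if !pl.Filter(p, n) {
				continue nextNode
			}
		}
		out = append(out, n.Name)
	}
	return out
}

func main() {
	nodes := []Node{{Name: "node-a", CPUFree: 2}, {Name: "node-b", CPUFree: 8}}
	fmt.Println(feasibleNodes(Pod{CPURequest: 4}, nodes, []FilterPlugin{NodeResourcesFit{}}))
}
```

&lt;p>The real framework defines many more extension points (Score, Reserve, Permit, and so on), but the
filter stage above captures the core contract that out-of-tree plugins implement.&lt;/p>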
&lt;p>Another major issue is maintaining sufficient scheduling throughput. Typically, a Kubernetes cluster
has only one kube-scheduler, so its throughput directly affects the overall scheduling scalability
and, consequently, the cluster&amp;rsquo;s scalability. Although we have an internal performance test
(&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/test/integration/scheduler_perf">scheduler_perf&lt;/a>),
unfortunately, we sometimes overlook performance degradation in less common scenarios. This is
difficult because even small changes that look irrelevant to performance can lead to degradation.&lt;/p>
&lt;p>&lt;strong>AP: What are some upcoming goals or initiatives for SIG Scheduling? How do you envision the SIG evolving in the future?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Our primary goal is always to build and maintain an &lt;em>extensible&lt;/em> and &lt;em>stable&lt;/em> scheduling
runtime, and I bet this goal will remain unchanged forever.&lt;/p>
&lt;p>As already mentioned, extensibility is key to solving the challenge of the diverse needs of
scheduling. Rather than trying to support every different use case directly in kube-scheduler, we
will continue to focus on enhancing extensibility so that it can accommodate various use
cases. The &lt;a href="https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension">kube-scheduler-wasm-extension&lt;/a>
project that I mentioned is also part of this initiative.&lt;/p>
&lt;p>Regarding stability, introducing new optimizations like QueueHint is one of our
strategies. Additionally, maintaining throughput is also a crucial goal towards the future. We’re
planning to enhance our throughput monitoring
(&lt;a href="https://github.com/kubernetes/kubernetes/issues/124774">ref&lt;/a>), so that we can notice degradation
as much as possible on our own before releasing. But, realistically, we can&amp;rsquo;t cover every possible
scenario. We highly appreciate any attention the community can give to scheduling throughput and
encourage feedback and alerts regarding performance issues!&lt;/p>
&lt;h2 id="closing-remarks">Closing Remarks&lt;/h2>
&lt;p>&lt;strong>AP: Finally, what message would you like to convey to those who are interested in learning more
about SIG Scheduling?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>KN&lt;/strong>: Scheduling is one of the most complicated areas in Kubernetes, and you may find it difficult
at first. But, as I shared earlier, you can find many opportunities for contributions, and many
maintainers are willing to help you understand things. We know your unique perspective and skills
are what make our open source community so powerful 😊&lt;/p>
&lt;p>Feel free to reach out to us in Slack
(&lt;a href="https://kubernetes.slack.com/archives/C09TP78DV">#sig-scheduling&lt;/a>) or at our
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md#meetings">meetings&lt;/a>.
I hope this article sparks your interest, and we look forward to welcoming new contributors!&lt;/p>
&lt;p>&lt;strong>AP: Thank you so much for taking the time to do this! I&amp;rsquo;m confident that many will find this
information invaluable for understanding more about SIG Scheduling and for contributing to the SIG.&lt;/strong>&lt;/p></description></item><item><title>Blog: Spotlight on SIG API Machinery</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</link><pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/08/07/sig-api-machinery-spotlight-2024/</guid><description>
&lt;p>We recently talked with &lt;a href="https://github.com/fedebongio">Federico Bongiovanni&lt;/a> (Google) and &lt;a href="https://github.com/deads2k">David
Eads&lt;/a> (Red Hat), Chairs of SIG API Machinery, to know a bit more about
this Kubernetes Special Interest Group.&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: I started working on
&lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift">OpenShift&lt;/a> (the Red Hat
distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My
first PRs were fixing kube-apiserver error messages and from there I branched out to &lt;code>kubectl&lt;/code>
(&lt;em>kubeconfigs&lt;/em> are my fault!), &lt;code>auth&lt;/code> (&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">RBAC&lt;/a> and &lt;code>*Review&lt;/code> APIs are ports
from OpenShift), &lt;code>apps&lt;/code> (&lt;em>workqueues&lt;/em> and &lt;em>sharedinformers&lt;/em> for example). Don’t tell the others,
but API Machinery is still my favorite :)&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: I was not as early in Kubernetes as David, but now it&amp;rsquo;s been more than six years. At
my previous company we were starting to use Kubernetes for our own products, and when I came across
the opportunity to work directly with Kubernetes I left everything and boarded the ship (no pun
intended). I joined Google and Kubernetes in early 2018, and have been involved since.&lt;/p>
&lt;h2 id="sig-machinerys-scope">SIG Machinery&amp;rsquo;s scope&lt;/h2>
&lt;p>&lt;strong>FSM: It only takes a quick look at the SIG API Machinery charter to see that it has quite a
significant scope, nothing less than the Kubernetes control plane. Could you describe this scope in
your own words?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: We own the &lt;code>kube-apiserver&lt;/code> and how to efficiently use it. On the backend, that includes
its contract with backend storage and how it allows API schema evolution over time. On the
frontend, that includes schema best practices, serialization, client patterns, and controller
patterns on top of all of it.&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: Kubernetes has a lot of different components, but the control plane has a really
critical mission: it&amp;rsquo;s your communication layer with the cluster and also owns all the extensibility
mechanisms that make Kubernetes so powerful. We can&amp;rsquo;t afford mistakes like a regression or an
incompatible change, because the blast radius is huge.&lt;/p>
&lt;p>&lt;strong>FSM: Given this breadth, how do you manage the different aspects of it?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: We try to organize the large amount of work into smaller areas. The working groups and
subprojects are part of it. Different people on the SIG have their own areas of expertise, and if
all else fails, we are really lucky to have people like David, Joe, and Stefan who really are &amp;ldquo;all
terrain&amp;rdquo;, in a way that keeps impressing me even after all these years. But on the other hand this
is the reason why we need more people to help us carry the quality and excellence of Kubernetes from
release to release.&lt;/p>
&lt;h2 id="an-evolving-collaboration-model">An evolving collaboration model&lt;/h2>
&lt;p>&lt;strong>FSM: Was the existing model always like this, or did it evolve with time - and if so, what would
you consider the main changes and the reason behind them?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: API Machinery has evolved over time, both growing and contracting in scope. When trying
to satisfy client access patterns, it’s very easy to add scope, both in terms of new features and their
application.&lt;/p>
&lt;p>A good example of growing scope is the way that we identified a need to reduce memory utilization by
clients writing controllers and developed shared informers. In developing shared informers and the
controller patterns that use them (workqueues, error handling, and listers), we greatly reduced memory
utilization and eliminated many expensive lists. The downside: we grew a new set of capabilities to
support and effectively took ownership of that area from sig-apps.&lt;/p>
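&lt;p>The shared-informer pattern David describes can be sketched in a toy form (a simplified model,
not the real &lt;code>k8s.io/client-go&lt;/code> API): one watch-fed cache is shared by every controller,
so controllers register event handlers instead of each issuing its own expensive LIST calls.&lt;/p>

```go
package main

import "fmt"

// Toy model of the shared-informer pattern (not the real k8s.io/client-go
// API): a single watch-fed cache is shared by all controllers, and each
// controller registers an event handler against it rather than keeping its
// own copy of every object.
type Event struct{ Kind, Name string }

var (
	cache    = map[string]Event{} // the one store shared by all controllers
	handlers []func(Event)
)

// AddEventHandler registers a controller callback against the shared cache.
func AddEventHandler(h func(Event)) { handlers = append(handlers, h) }

// OnAdd simulates one watch event arriving from the API server: the shared
// cache is updated once, then the event fans out to every registered handler.
func OnAdd(e Event) {
	cache[e.Kind+"/"+e.Name] = e
	for _, h := range handlers {
		h(e)
	}
}

func main() {
	// Two "controllers" share one cache instead of holding two full copies.
	AddEventHandler(func(e Event) { fmt.Println("deployment controller saw", e.Name) })
	AddEventHandler(func(e Event) { fmt.Println("gc controller saw", e.Name) })
	OnAdd(Event{Kind: "Pod", Name: "web-0"})
	fmt.Println("cached objects:", len(cache))
}
```

&lt;p>In client-go the same idea is packaged as shared informer factories, with listers reading from
the shared cache and workqueues decoupling event delivery from reconciliation.&lt;/p>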
&lt;p>For an example of more shared ownership: building out cooperative resource management (the goal of
server-side apply), &lt;code>kubectl&lt;/code> expanded to take ownership of leveraging the server-side apply
capability. The transition isn’t yet complete, but &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli">SIG
CLI&lt;/a> manages that usage and owns it.&lt;/p>
&lt;p>&lt;strong>FSM: And for the boundary between approaches, do you have any guidelines?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: I think much depends on the impact. If the impact is local in immediate effect, we advise
other SIGs and let them move at their own pace. If the impact is global in immediate effect without
a natural incentive, we’ve found a need to press for adoption directly.&lt;/p>
&lt;p>&lt;strong>FSM: Still on that note, SIG Architecture has an API Governance subproject, is it mostly
independent from SIG API Machinery or are there important connection points?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: The projects have similar sounding names and carry some impacts on each other, but have
different missions and scopes. API Machinery owns the how and API Governance owns the what. API
conventions, the API approval process, and the final say on individual k8s.io APIs belong to API
Governance. API Machinery owns the REST semantics and non-API specific behaviors.&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: I really like how David put it: &lt;em>&amp;ldquo;API Machinery owns the how and API Governance owns
the what&amp;rdquo;&lt;/em>: we don&amp;rsquo;t own the actual APIs, but the actual APIs live through us.&lt;/p>
&lt;h2 id="the-challenges-of-kubernetes-popularity">The challenges of Kubernetes popularity&lt;/h2>
&lt;p>&lt;strong>FSM: With the growth in Kubernetes adoption we have certainly seen increased demands from the
Control Plane: how is this felt and how does it influence the work of the SIG?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David&lt;/strong>: It’s had a massive influence on API Machinery. Over the years we have often responded to
and many times enabled the evolutionary stages of Kubernetes. As the central orchestration hub of
nearly all capability on Kubernetes clusters, we both lead and follow the community. In broad
strokes I see a few evolution stages for API Machinery over the years, with constantly high
activity.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Finding purpose&lt;/strong>: &lt;code>pre-1.0&lt;/code> up until &lt;code>v1.3&lt;/code> (up to our first 1000+ nodes/namespaces) or
so. This time was characterized by rapid change. We went through five different versions of our
schemas and rose to meet the need. We optimized for quick, in-tree API evolution (sometimes to
the detriment of longer term goals), and defined patterns for the first time.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Scaling to meet the need&lt;/strong>: &lt;code>v1.3-1.9&lt;/code> (up to shared informers in controllers) or so. When we
started trying to meet customer needs as we gained adoption, we found severe scale limitations in
terms of CPU and memory. This was where we broadened API machinery to include access patterns, but
were still heavily focused on in-tree types. We built the watch cache, protobuf serialization,
and shared caches.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Fostering the ecosystem&lt;/strong>: &lt;code>v1.8-1.21&lt;/code> (up to CRD v1) or so. This was when we designed and wrote
CRDs (the considered replacement for third-party-resources), the immediate needs we knew were
coming (admission webhooks), and evolution to best practices we knew we needed (API schemas).
This enabled an explosion of early adopters willing to work very carefully within the constraints
to enable their use-cases for servicing pods. The adoption was very fast, sometimes outpacing
our capability, and creating new problems.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Simplifying deployments&lt;/strong>: &lt;code>v1.22+&lt;/code>. In the relatively recent past, we’ve been responding to
the pressures of running kube clusters at scale with large numbers of sometimes-conflicting ecosystem
projects using our extension mechanisms. Lots of effort is now going into making platform
extensions easier to write and safer to manage by people who don&amp;rsquo;t hold PhDs in Kubernetes. This
started with things like server-side-apply and continues today with features like webhook match
conditions and validating admission policies.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>Work in API Machinery has a broad impact across the project and the ecosystem. It’s an exciting
area to work in for those able to make a significant time investment on a long time horizon.&lt;/p>
&lt;h2 id="the-road-ahead">The road ahead&lt;/h2>
&lt;p>&lt;strong>FSM: With those different evolutionary stages in mind, what would you pinpoint as the top
priorities for the SIG at this time?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>David:&lt;/strong> &lt;strong>Reliability, efficiency, and capability&lt;/strong> in roughly that order.&lt;/p>
&lt;p>With the increased usage of our &lt;code>kube-apiserver&lt;/code> and extension mechanisms, we find that our first
set of extension mechanisms, while fairly complete in terms of capability, carries significant risks
in terms of potential mis-use with large blast radius. To mitigate these risks, we’re investing in
features that reduce the blast radius for accidents (webhook match conditions) and which provide
alternative mechanisms with lower risk profiles for most actions (validating admission policy).&lt;/p>
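&lt;p>For a concrete flavor of the lower-risk mechanism David mentions, a minimal
&lt;code>ValidatingAdmissionPolicy&lt;/code> expresses a rule as an in-process CEL expression rather than a
webhook call. This is a sketch: the policy name and the toy rule (replicas must be positive) are
invented for this example, and a &lt;code>ValidatingAdmissionPolicyBinding&lt;/code> is still needed to put
it into effect.&lt;/p>

```yaml
# Hypothetical example policy: reject Deployments with a non-positive replica
# count, evaluated in-process by the kube-apiserver (no webhook round trip).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-positive-replicas   # invented name for this sketch
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas > 0"
```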
&lt;p>At the same time, the increased usage has made us more aware of scaling limitations that we can
improve both server and client-side. Efforts here include more efficient serialization (CBOR),
reduced etcd load (consistent reads from cache), and reduced peak memory usage (streaming lists).&lt;/p>
&lt;p>And finally, the increased usage has highlighted some long-existing
gaps that we’re closing. Things like field selectors for CRDs, which
the &lt;a href="https://github.com/kubernetes/community/blob/master/wg-batch/README.md">Batch Working Group&lt;/a>
is eager to leverage, will eventually form the basis for a new way
to prevent trampoline pod attacks from exploited nodes.&lt;/p>
&lt;h2 id="joining-the-fun">Joining the fun&lt;/h2>
&lt;p>&lt;strong>FSM: For anyone wanting to start contributing, what are your suggestions?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: SIG API Machinery is not an exception to the Kubernetes motto: &lt;strong>Chop Wood and Carry
Water&lt;/strong>. There are multiple weekly meetings that are open to everybody, and there is always more
work to be done than people to do it.&lt;/p>
&lt;p>I acknowledge that API Machinery is not easy, and the ramp-up will be steep. The bar is high,
because of the reasons we&amp;rsquo;ve been discussing: we carry a huge responsibility. But of course, with
passion and perseverance, many people have ramped up through the years, and we hope more will come.&lt;/p>
&lt;p>In terms of concrete opportunities, there is the SIG meeting every two weeks. Everyone is welcome to
attend and listen, see what the group talks about, see what&amp;rsquo;s going on in this release, etc.&lt;/p>
&lt;p>Also, twice a week, on Tuesdays and Thursdays, we have the public Bug Triage, where we go through
everything new since the last meeting. We&amp;rsquo;ve kept up this practice for more than 7 years
now. It&amp;rsquo;s a great opportunity to volunteer to review code, fix bugs, improve documentation,
etc. On Tuesdays it&amp;rsquo;s at 1 PM (PST), and on Thursdays it&amp;rsquo;s at an EMEA-friendly time (9:30 AM PST). We are
always looking to improve, and we hope to be able to provide more concrete opportunities to join and
participate in the future.&lt;/p>
&lt;p>&lt;strong>FSM: Excellent, thank you! Any final comments you would like to share with our readers?&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Federico&lt;/strong>: As I mentioned, the first steps might be hard, but the reward is also larger. Working
on API Machinery is working on an area of huge impact (millions of users?), and your contributions
will have a direct outcome in the way that Kubernetes works and the way that it&amp;rsquo;s used. For me
that&amp;rsquo;s enough reward and motivation!&lt;/p></description></item><item><title>Blog: Spotlight on SIG Node</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</link><pubDate>Thu, 20 Jun 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/06/20/sig-node-spotlight-2024/</guid><description>
&lt;p>In the world of container orchestration, &lt;a href="https://kubernetes.io/">Kubernetes&lt;/a> reigns
supreme, powering some of the most complex and dynamic applications across the globe. Behind the
scenes, a network of Special Interest Groups (SIGs) drives Kubernetes&amp;rsquo; innovation and stability.&lt;/p>
&lt;p>Today, I have the privilege of speaking with
&lt;a href="https://www.linkedin.com/in/matthias-bertschy-b427b815/">Matthias Bertschy&lt;/a>,
&lt;a href="https://www.linkedin.com/in/gunju-kim-916b33190/">Gunju Kim&lt;/a>, and
&lt;a href="https://www.linkedin.com/in/sergeykanzhelev/">Sergey Kanzhelev&lt;/a>, members of
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-node/README.md">SIG Node&lt;/a>,
who will shed some light on their roles, challenges, and the exciting developments within SIG Node.&lt;/p>
&lt;p>&lt;em>Answers given collectively by all interviewees will be marked by their initials.&lt;/em>&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Arpit:&lt;/strong> Thank you for joining us today. Could you please introduce yourselves and provide a brief
overview of your roles within SIG Node?&lt;/p>
&lt;p>&lt;strong>Matthias:&lt;/strong> My name is Matthias Bertschy, I am French and live next to Lake Geneva, near the
French Alps. I have been a Kubernetes contributor since 2017, a reviewer for SIG Node and a
maintainer of &lt;a href="https://docs.prow.k8s.io/docs/overview/">Prow&lt;/a>. I work as a Senior Kubernetes
Developer for a security startup named &lt;a href="https://www.armosec.io/">ARMO&lt;/a>, which donated
&lt;a href="https://www.cncf.io/projects/kubescape/">Kubescape&lt;/a> to the CNCF.&lt;/p>
&lt;p>&lt;img alt="Lake Geneva and the Alps" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/sig-node-spotlight/Lake_Geneva_and_the_Alps.jpg">&lt;/p>
&lt;p>&lt;strong>Gunju:&lt;/strong> My name is Gunju Kim. I am a software engineer at
&lt;a href="https://www.navercorp.com/naver/naverMain">NAVER&lt;/a>, where I focus on developing a cloud platform for
search services. I have been contributing to the Kubernetes project in my free time since 2021.&lt;/p>
&lt;p>&lt;strong>Sergey:&lt;/strong> My name is Sergey Kanzhelev. I have worked on Kubernetes and &lt;a href="https://cloud.google.com/kubernetes-engine">Google Kubernetes
Engine&lt;/a> for 3 years and have worked on open-source
projects for many years now. I am a chair of SIG Node.&lt;/p>
&lt;h2 id="understanding-sig-node">Understanding SIG Node&lt;/h2>
&lt;p>&lt;strong>Arpit:&lt;/strong> Thank you! Could you provide our readers with an overview of SIG Node&amp;rsquo;s responsibilities
within the Kubernetes ecosystem?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> SIG Node is one of the first, if not the very first, SIGs in Kubernetes. The SIG is
responsible for all interactions between Kubernetes and node resources, as well as node maintenance
itself. This is quite a large scope, and the SIG owns a large part of the Kubernetes codebase. Because
of this wide ownership, SIG Node is always in contact with other SIGs such as SIG Network, SIG
Storage, and SIG Security, and almost any new feature or development in Kubernetes involves SIG
Node in some way.&lt;/p>
&lt;p>&lt;strong>Arpit&lt;/strong>: How does SIG Node contribute to Kubernetes&amp;rsquo; performance and stability?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> Kubernetes works on nodes of many different sizes and shapes, from small physical VMs
with cheap hardware to large AI/ML-optimized GPU-enabled nodes. Nodes may stay online for months, or
may be short-lived and preempted at any moment when running on the excess compute of a cloud
provider.&lt;/p>
&lt;p>&lt;a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet">&lt;code>kubelet&lt;/code>&lt;/a> — the
Kubernetes agent on a node — must work in all these environments reliably. As for the performance
of kubelet operations, this is becoming increasingly important today. On one hand, as Kubernetes is
being used on extra small nodes more and more often in telecom and retail environments, it needs to
scale into the smallest footprint possible. On the other hand, with AI/ML workloads where every node
is extremely expensive, every second of delayed operations can visibly change the price of
computation.&lt;/p>
&lt;h2 id="challenges-and-opportunities">Challenges and Opportunities&lt;/h2>
&lt;p>&lt;strong>Arpit:&lt;/strong> What upcoming challenges and opportunities is SIG Node keeping an eye on?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> As Kubernetes enters the second decade of its life, we see a huge demand to support new
workload types. And SIG Node will play a big role in this. The Sidecar KEP, which we will be talking
about later, is one of the examples of increased emphasis on supporting new workload types.&lt;/p>
&lt;p>The key challenge we will have in the next few years is how to keep innovations while maintaining
high quality and backward compatibility of existing scenarios. SIG Node will continue to play a
central role in Kubernetes.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> And are there any ongoing research or development areas within SIG Node that excite you?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> Supporting new workload types is a fascinating area for us. Our recent exploration of
sidecar containers is a testament to this. Sidecars offer a versatile solution for enhancing
application functionality without altering the core codebase.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> What are some of the challenges you&amp;rsquo;ve faced while maintaining SIG Node, and how have you
overcome them?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> The biggest challenge of SIG Node is its size and the many feature requests it
receives. We are encouraging more people to join as reviewers and are always open to improving
processes and addressing feedback. For every release, we run the feedback session at the SIG Node
meeting and identify problematic areas and action items.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> Are there specific technologies or advancements that SIG Node is closely monitoring or
integrating into Kubernetes?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> Developments in components that the SIG depends on, like &lt;a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/">container
runtimes&lt;/a>
(e.g. &lt;a href="https://containerd.io/">containerd&lt;/a> and &lt;a href="https://cri-o.io/">CRI-O&lt;/a>) and OS features, are
something we contribute to and monitor closely. For example, there is an upcoming &lt;em>cgroup v1&lt;/em>
deprecation and removal that Kubernetes and SIG Node will need to guide Kubernetes users
through. containerd is also releasing version &lt;code>2.0&lt;/code>, which removes deprecated features and will
affect Kubernetes users.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> Could you share a memorable experience or achievement from your time as a SIG Node
maintainer that you&amp;rsquo;re particularly proud of?&lt;/p>
&lt;p>&lt;strong>Matthias:&lt;/strong> I think the best moment was when my first KEP (introducing the
&lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes">&lt;code>startupProbe&lt;/code>&lt;/a>)
finally graduated to GA (General Availability). I also enjoy seeing my contributions being used
daily by contributors, such as the comment containing the GitHub tree hash used to retain LGTM
despite squash commits.&lt;/p>
&lt;h2 id="sidecar-containers">Sidecar containers&lt;/h2>
&lt;p>&lt;strong>Arpit:&lt;/strong> Can you provide more context on the concept of sidecar containers and their evolution in
the context of Kubernetes?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> The concept of &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/">sidecar
containers&lt;/a> dates back to
2015 when Kubernetes introduced the idea of composite containers. These additional containers,
running alongside the main application container within the same pod, were seen as a way to extend
and enhance application functionality without modifying the core codebase. Early adopters of
sidecars employed custom scripts and configurations to manage them, but this approach presented
challenges in terms of consistency and scalability.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> Can you share specific use cases or examples where sidecar containers are particularly
beneficial?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> Sidecar containers are a versatile tool that can be used to enhance the functionality of
applications in a variety of ways:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Logging and monitoring:&lt;/strong> Sidecar containers can be used to collect logs and metrics from the
primary application container and send them to a centralized logging and monitoring system.&lt;/li>
&lt;li>&lt;strong>Traffic filtering and routing:&lt;/strong> Sidecar containers can be used to filter and route traffic to
and from the primary application container.&lt;/li>
&lt;li>&lt;strong>Encryption and decryption:&lt;/strong> Sidecar containers can be used to encrypt and decrypt data as it
flows between the primary application container and external services.&lt;/li>
&lt;li>&lt;strong>Data synchronization:&lt;/strong> Sidecar containers can be used to synchronize data between the primary
application container and external databases or services.&lt;/li>
&lt;li>&lt;strong>Fault injection:&lt;/strong> Sidecar containers can be used to inject faults into the primary application
container in order to test its resilience to failures.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Arpit:&lt;/strong> The proposal mentions that some companies are using a fork of Kubernetes with sidecar
functionality added. Can you provide insights into the level of adoption and community interest in
this feature?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> While we lack concrete metrics to measure adoption rates, the KEP has garnered
significant interest from the community, particularly among service mesh vendors like Istio, who
actively participated in its alpha testing phase. The KEP&amp;rsquo;s visibility through numerous blog posts,
interviews, talks, and workshops further demonstrates its widespread appeal. The KEP addresses the
growing demand for additional capabilities alongside main containers in Kubernetes pods, such as
network proxies, logging systems, and security measures. The community acknowledges the importance
of providing easy migration paths for existing workloads to facilitate widespread adoption of the
feature.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> Are there any notable examples or success stories from companies using sidecar containers
in production?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> It is still too early to expect widespread adoption in production environments. The 1.29
release has only been available in Google Kubernetes Engine (GKE) since January 11, 2024, and there
is still no comprehensive documentation on how to enable and use sidecars effectively via a
universal injector. Istio, a popular service mesh platform, also lacks proper documentation for
enabling native sidecars, making it difficult for developers to get started with this new
feature. However, as native sidecar support matures and documentation improves, we can expect to see
wider adoption of this technology in production environments.&lt;/p>
&lt;p>&lt;strong>Arpit:&lt;/strong> The proposal suggests introducing a &lt;code>restartPolicy&lt;/code> field for init containers to indicate
sidecar functionality. Can you explain how this solution addresses the outlined challenges?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> The proposal to introduce a &lt;code>restartPolicy&lt;/code> field for init containers addresses the
outlined challenges by utilizing existing infrastructure and simplifying sidecar management. This
approach avoids adding new fields to the pod specification, keeping it manageable and avoiding more
clutter. By leveraging the existing init container mechanism, sidecars can be run alongside regular
init containers during pod startup, ensuring a consistent ordering of initialization. Additionally,
setting the restart policy of sidecar init containers to &lt;code>Always&lt;/code> explicitly states that they continue
running even after the main application container terminates, enabling persistent services like
logging and monitoring until the end of the workload.&lt;/p>
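&lt;p>In pod-spec terms, the resulting shape is small: the sidecar is an ordinary init container whose
&lt;code>restartPolicy&lt;/code> is set to &lt;code>Always&lt;/code>. (A sketch; the container names and images
below are invented for this example.)&lt;/p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
  - name: log-shipper              # runs for the pod's whole lifetime
    image: example.com/log-shipper:latest
    restartPolicy: Always          # this one field marks it as a sidecar
  containers:
  - name: app                      # main workload starts after the sidecar is up
    image: example.com/app:latest
```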
&lt;p>&lt;strong>Arpit:&lt;/strong> How will the introduction of the &lt;code>restartPolicy&lt;/code> field for init containers affect
backward compatibility with existing Kubernetes configurations?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> The introduction of the &lt;code>restartPolicy&lt;/code> field for init containers will maintain backward
compatibility with existing Kubernetes configurations. Existing init containers will continue to
function as they have before, and the new &lt;code>restartPolicy&lt;/code> field will only apply to init containers
explicitly marked as sidecars. This approach ensures that existing applications and deployments will
not be disrupted by the new feature, and provides a more streamlined way to define and manage
sidecars.&lt;/p>
&lt;h2 id="contributing-to-sig-node">Contributing to SIG Node&lt;/h2>
&lt;p>&lt;strong>Arpit:&lt;/strong> What is the best place for new members, and especially beginners, to contribute?&lt;/p>
&lt;p>&lt;strong>M/G/S:&lt;/strong> New members and beginners can contribute to the Sidecar KEP (Kubernetes Enhancement
Proposal) by:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Raising awareness:&lt;/strong> Create content that highlights the benefits and use cases of sidecars. This
can educate others about the feature and encourage its adoption.&lt;/li>
&lt;li>&lt;strong>Providing feedback:&lt;/strong> Share your experiences with sidecars, both positive and negative. This
feedback can be used to improve the feature and make it more widely usable.&lt;/li>
&lt;li>&lt;strong>Sharing your use cases:&lt;/strong> If you are using sidecars in production,
share your experiences with others. This can help to demonstrate the
real-world value of the feature and encourage others to adopt it.&lt;/li>
&lt;li>&lt;strong>Improving the documentation:&lt;/strong> Help to clarify and expand the documentation for the
feature. This can make it easier for others to understand and use sidecars.&lt;/li>
&lt;/ul>
&lt;p>In addition to the Sidecar KEP, there are many other areas where SIG Node needs more contributors:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Test coverage:&lt;/strong> SIG Node is always looking for ways to improve the test coverage of Kubernetes
components.&lt;/li>
&lt;li>&lt;strong>CI maintenance:&lt;/strong> SIG Node maintains a suite of e2e tests ensuring Kubernetes components
function as intended across a variety of scenarios.&lt;/li>
&lt;/ul>
&lt;h1 id="conclusion">Conclusion&lt;/h1>
&lt;p>In conclusion, SIG Node stands as a cornerstone in Kubernetes&amp;rsquo; journey, ensuring its reliability and
adaptability in the ever-changing landscape of cloud-native computing. With dedicated members like
Matthias, Gunju, and Sergey leading the charge, SIG Node remains at the forefront of innovation,
driving Kubernetes towards new horizons.&lt;/p></description></item><item><title>Blog: Introducing Hydrophone</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/05/23/introducing-hydrophone/</link><pubDate>Thu, 23 May 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/05/23/introducing-hydrophone/</guid><description>
&lt;p>In the ever-changing landscape of Kubernetes, ensuring that clusters operate as intended is
essential. This is where conformance testing becomes crucial, verifying that a Kubernetes cluster
meets the required standards set by the community. Today, we&amp;rsquo;re thrilled to introduce
&lt;a href="https://github.com/kubernetes-sigs/hydrophone/">&lt;em>Hydrophone&lt;/em>&lt;/a>, a lightweight runner designed to
streamline Kubernetes tests using the official conformance images released by the Kubernetes release
team.&lt;/p>
&lt;h2 id="simplified-kubernetes-testing-with-hydrophone">Simplified Kubernetes testing with Hydrophone&lt;/h2>
&lt;p>Hydrophone&amp;rsquo;s design philosophy centers on ease of use. Hydrophone starts the conformance image as a
pod within the &lt;em>conformance&lt;/em> namespace, waits for the tests to conclude, then prints and
exports the results. This approach offers a hassle-free method for running either individual tests
or the entire &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md">Conformance Test
Suite&lt;/a>.&lt;/p>
&lt;h2 id="key-features-of-hydrophone">Key features of Hydrophone&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>Ease of Use&lt;/strong>: Designed with simplicity in mind, Hydrophone provides an easy-to-use tool for
conducting Kubernetes conformance tests.&lt;/li>
&lt;li>&lt;strong>Official Conformance Images&lt;/strong>: It leverages the official conformance images from the Kubernetes
Release Team, ensuring that you&amp;rsquo;re using the most up-to-date and reliable resources for testing.&lt;/li>
&lt;li>&lt;strong>Flexible Test Execution&lt;/strong>: Run a single test, the entire Conformance Test
Suite, or anything in between.&lt;/li>
&lt;/ul>
&lt;h2 id="streamlining-kubernetes-conformance-with-hydrophone">Streamlining Kubernetes conformance with Hydrophone&lt;/h2>
&lt;p>In the Kubernetes world, where providers like EKS, Rancher, and k3s offer diverse environments,
ensuring consistent experiences is vital. This consistency is anchored in conformance testing, which
validates whether these environments adhere to Kubernetes community standards. Historically, this
validation has either been cumbersome or required third-party tools. Hydrophone offers a simple,
single binary tool that streamlines running these essential conformance tests. It&amp;rsquo;s designed to be
user-friendly, allowing for straightforward validation of Kubernetes clusters against community
benchmarks, ensuring providers can offer a certified, consistent service.&lt;/p>
&lt;p>Hydrophone doesn&amp;rsquo;t aim to replace the myriad of Kubernetes testing frameworks out there but rather
to complement them. It focuses on facilitating conformance tests efficiently, without developing new
tests or heavy integration with other tools.&lt;/p>
&lt;h2 id="getting-started-with-hydrophone">Getting started with Hydrophone&lt;/h2>
&lt;p>Installing Hydrophone is straightforward. You need a Go development environment; once you have that:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>go install sigs.k8s.io/hydrophone@latest
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Running &lt;code>hydrophone&lt;/code> by default will:&lt;/p>
&lt;ul>
&lt;li>Create a pod and supporting resources in the &lt;code>conformance&lt;/code> namespace on your cluster.&lt;/li>
&lt;li>Execute the entire conformance test suite for the cluster version you&amp;rsquo;re running.&lt;/li>
&lt;li>Output the test results and export &lt;code>e2e.log&lt;/code> and &lt;code>junit_01.xml&lt;/code> needed for conformance validation.&lt;/li>
&lt;/ul>
&lt;p>There are supporting flags to specify which tests to run, which to skip, the cluster you&amp;rsquo;re targeting and much more!&lt;/p>
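&lt;p>For example (flag names as documented in the project README; check &lt;code>hydrophone --help&lt;/code>
for the full list), you can focus on or skip specific tests and target a particular cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash"># run only tests whose names match a pattern
hydrophone --focus 'Secrets should be consumable via the environment'

# run the full conformance suite against a specific cluster
hydrophone --conformance --kubeconfig ~/.kube/other-cluster
&lt;/code>&lt;/pre>&lt;/div>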
&lt;h2 id="community-and-contributions">Community and contributions&lt;/h2>
&lt;p>The Hydrophone project is part of SIG Testing and open to the community for bugs, feature requests,
and other contributions. You can engage with the project maintainers via Kubernetes Slack channels
&lt;em>#hydrophone&lt;/em>, &lt;em>#sig-testing&lt;/em>, and &lt;em>#k8s-conformance&lt;/em>, or by filing an issue against the
repository. We&amp;rsquo;re also active in the Kubernetes SIG-Testing and SIG-Release Mailing Lists. We
encourage pull requests and discussions to make Hydrophone even better.&lt;/p>
&lt;h2 id="join-us-in-simplifying-kubernetes-testing">Join us in simplifying Kubernetes testing&lt;/h2>
&lt;p>In SIG Testing, we believe Hydrophone will be a valuable tool for anyone looking to validate the conformance of
their Kubernetes clusters easily. Whether you&amp;rsquo;re developing new features, or testing your
application, Hydrophone offers an efficient testing experience.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Architecture: Code Organization</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</link><pubDate>Thu, 11 Apr 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/04/11/sig-architecture-code-spotlight-2024/</guid><description>
&lt;p>&lt;em>This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. This time, we look at &lt;a href="https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization">SIG Architecture: Code Organization&lt;/a>.&lt;/em>&lt;/p>
&lt;p>In this SIG Architecture spotlight I talked with &lt;a href="https://github.com/MadhavJivrajani">Madhav Jivrajani&lt;/a>
(VMware), a member of the Code Organization subproject.&lt;/p>
&lt;h2 id="introducing-the-code-organization-subproject">Introducing the Code Organization subproject&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM)&lt;/strong>: Hello Madhav, thank you for your availability. Could you start by telling us a
bit about yourself, your role and how you got involved in Kubernetes?&lt;/p>
&lt;p>&lt;strong>Madhav Jivrajani (MJ)&lt;/strong>: Hello! My name is Madhav Jivrajani, I serve as a technical lead for SIG
Contributor Experience and a GitHub Admin for the Kubernetes project. Apart from that I also
contribute to SIG API Machinery and SIG Etcd, but more recently, I’ve been helping out with the work
that is needed to help Kubernetes &lt;a href="https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions">stay on supported versions of
Go&lt;/a>,
and it is through this that I am involved with the Code Organization subproject of SIG Architecture.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: A project the size of Kubernetes must have unique challenges in terms of code organization
&amp;ndash; is this a fair assumption? If so, what would you pick as some of the main challenges that are
specific to Kubernetes?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: That’s a fair assumption! The first interesting challenge comes from the sheer size of the
Kubernetes codebase. We have ≈2.2 million lines of Go code (which is steadily decreasing thanks to
&lt;a href="https://github.com/dims">dims&lt;/a> and other folks in this sub-project!), and a little over 240
dependencies that we rely on either directly or indirectly, which is why having a sub-project
dedicated to helping out with dependency management is crucial: we need to know what dependencies
we’re pulling in, what versions these dependencies are at, and tooling to help make sure we are
managing these dependencies across different parts of the codebase in a consistent manner.&lt;/p>
&lt;p>Another interesting challenge with Kubernetes is that we publish a lot of Go modules as part of the
Kubernetes release cycles, one example of this is
&lt;a href="https://github.com/kubernetes/client-go">&lt;code>client-go&lt;/code>&lt;/a>.However, we as a project would also like the
benefits of having everything in one repository to get the advantages of using a monorepo, like
atomic commits&amp;hellip; so, because of this, code organization works with other SIGs (like SIG Release) to
automate the process of publishing code from the monorepo to downstream individual repositories
which are much easier to consume, and this way you won’t have to import the entire Kubernetes
codebase!&lt;/p>
&lt;h2 id="code-organization-and-kubernetes">Code organization and Kubernetes&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: For someone just starting contributing to Kubernetes code-wise, what are the main things
they should consider in terms of code organization? How would you sum up the key concepts?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: I think one of the key things to keep in mind at least as you’re starting off is the concept
of staging directories. In the &lt;a href="https://github.com/kubernetes/kubernetes">&lt;code>kubernetes/kubernetes&lt;/code>&lt;/a>
repository, you will come across a directory called
&lt;a href="https://github.com/kubernetes/kubernetes/tree/master/staging">&lt;code>staging/&lt;/code>&lt;/a>. The sub-folders in this
directory serve as a bunch of pseudo-repositories. For example, the
&lt;a href="https://github.com/kubernetes/client-go">&lt;code>kubernetes/client-go&lt;/code>&lt;/a> repository that publishes releases
for &lt;code>client-go&lt;/code> is actually a &lt;a href="https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go">staging
repo&lt;/a>.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: So the concept of staging directories fundamentally impacts contributions?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: Precisely, because if you’d like to contribute to any of the staging repos, you will need to
send in a PR to its corresponding staging directory in &lt;code>kubernetes/kubernetes&lt;/code>. Once the code merges
there, we have a bot called the &lt;a href="https://github.com/kubernetes/publishing-bot">&lt;code>publishing-bot&lt;/code>&lt;/a>
that will sync the merged commits to the required staging repositories (like
&lt;code>kubernetes/client-go&lt;/code>). This way we get the benefits of a monorepo but can also modularly
publish code for downstream consumption. PS: The &lt;code>publishing-bot&lt;/code> needs more folks to help out!&lt;/p>
&lt;p>For more information on staging repositories, please see the &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/staging.md">contributor
documentation&lt;/a>.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Speaking of contributions, the very high number of contributors, both individuals and
companies, must also be a challenge: how does the subproject operate in terms of making sure that
standards are being followed?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: When it comes to dependency management in the project, there is a &lt;a href="https://github.com/kubernetes/org/blob/a106af09b8c345c301d072bfb7106b309c0ad8e9/config/kubernetes/org.yaml#L1329">dedicated
team&lt;/a>
that helps review and approve dependency changes. These are folks who have helped lay the foundation
of much of the
&lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/vendor.md">tooling&lt;/a>
that Kubernetes uses today for dependency management. This tooling helps ensure there is a
consistent way that contributors can make changes to dependencies. The project has also worked on
additional tooling to report statistics on dependencies that are being added or removed:
&lt;a href="https://github.com/kubernetes-sigs/depstat">&lt;code>depstat&lt;/code>&lt;/a>.&lt;/p>
&lt;p>Apart from dependency management, another crucial task that the project does is management of the
staging repositories. The tooling for achieving this (&lt;code>publishing-bot&lt;/code>) is completely transparent to
contributors and helps ensure that the staging repos get a consistent view of contributions that are
submitted to &lt;code>kubernetes/kubernetes&lt;/code>.&lt;/p>
&lt;p>Code Organization also works towards making sure that Kubernetes &lt;a href="https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions">stays on supported versions of
Go&lt;/a>. The
linked KEP provides more context on why we need to do this. We collaborate with SIG Release to
ensure that we are testing Kubernetes as rigorously and as early as we can on Go releases and
working on changes that break our CI as a part of this. An example of how we track this process can
be found &lt;a href="https://github.com/kubernetes/release/issues/3076">here&lt;/a>.&lt;/p>
&lt;h2 id="release-cycle-and-current-priorities">Release cycle and current priorities&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Is there anything that changes during the release cycle?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: During the release cycle, specifically before code freeze, there are often changes that go in
that add, update, or delete dependencies, or that fix code as part of our effort to stay on
supported versions of Go.&lt;/p>
&lt;p>Furthermore, some of these changes are also candidates for
&lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md">backporting&lt;/a>
to our supported release branches.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Is there any major project or theme the subproject is working on right now that you would
like to highlight?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: I think one very interesting and immensely useful change that
has been recently added (and I take the opportunity to specifically
highlight the work of &lt;a href="https://github.com/thockin">Tim Hockin&lt;/a> on
this) is the introduction of &lt;a href="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/03/19/go-workspaces-in-kubernetes/">Go workspaces to the Kubernetes
repo&lt;/a>. A lot of our
current tooling for dependency management and code publishing, as well
as the experience of editing code in the Kubernetes repo, can be
significantly improved by this change.&lt;/p>
&lt;h2 id="wrapping-up">Wrapping up&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: How would someone interested in the topic start helping the subproject?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: The first step, as with any project in Kubernetes, is to join our Slack:
&lt;a href="https://slack.k8s.io">slack.k8s.io&lt;/a>, and after that join the &lt;code>#k8s-code-organization&lt;/code> channel. There is also a
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-architecture#meetings">code-organization office
hours&lt;/a> session that you can choose to attend. Timezones are hard, so feel free to also look at the recordings
or meeting notes and follow up on Slack!&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Excellent, thank you! Any final comments you would like to share?&lt;/p>
&lt;p>&lt;strong>MJ&lt;/strong>: The Code Organization subproject always needs help! Especially areas like the publishing
bot, so don’t hesitate to get involved in the &lt;code>#k8s-code-organization&lt;/code> Slack channel.&lt;/p></description></item><item><title>Blog: Using Go workspaces in Kubernetes</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/03/19/go-workspaces-in-kubernetes/</link><pubDate>Tue, 19 Mar 2024 08:30:00 -0800</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/03/19/go-workspaces-in-kubernetes/</guid><description>
&lt;p>The &lt;a href="https://go.dev/">Go programming language&lt;/a> has played a huge role in the
success of Kubernetes. As Kubernetes has grown, matured, and pushed the bounds
of what &amp;ldquo;regular&amp;rdquo; projects do, the Go project team has also grown and evolved
the language and tools. In recent releases, Go introduced a feature called
&amp;ldquo;workspaces&amp;rdquo; which was aimed at making projects like Kubernetes easier to
manage.&lt;/p>
&lt;p>We&amp;rsquo;ve just completed a major effort to adopt workspaces in Kubernetes, and the
results are great. Our codebase is simpler and less error-prone, and we&amp;rsquo;re no
longer off on our own technology island.&lt;/p>
&lt;h2 id="gopath-and-go-modules">GOPATH and Go modules&lt;/h2>
&lt;p>Kubernetes is one of the most visible open source projects written in Go. The
earliest versions of Kubernetes, dating back to 2014, were built with Go 1.3.
Today, 10 years later, Go is up to version 1.22 — and let&amp;rsquo;s just say that a
&lt;em>whole lot&lt;/em> has changed.&lt;/p>
&lt;p>In 2014, Go development was entirely based on
&lt;a href="https://go.dev/wiki/GOPATH">&lt;code>GOPATH&lt;/code>&lt;/a>. As a Go project, Kubernetes lived by the
rules of &lt;code>GOPATH&lt;/code>. In the buildup to Kubernetes 1.4 (mid 2016), we introduced a
directory tree called &lt;code>staging&lt;/code>. This allowed us to pretend to be multiple
projects, but still exist within one git repository (which had advantages for
development velocity). The magic of &lt;code>GOPATH&lt;/code> allowed this to work.&lt;/p>
&lt;p>Kubernetes depends on several code-generation tools which have to find, read,
and write Go code packages. Unsurprisingly, those tools grew to rely on
&lt;code>GOPATH&lt;/code>. This all worked pretty well until Go introduced modules in Go 1.11
(mid 2018).&lt;/p>
&lt;p>Modules were an answer to many issues around &lt;code>GOPATH&lt;/code>. They gave more control to
projects on how to track and manage dependencies, and were overall a great step
forward. Kubernetes adopted them. However, modules had one major drawback —
most Go tools could not work on multiple modules at once. This was a problem
for our code-generation tools and scripts.&lt;/p>
&lt;p>Thankfully, Go offered a way to temporarily disable modules (&lt;code>GO111MODULE&lt;/code> to
the rescue). We could get the dependency tracking benefits of modules, but the
flexibility of &lt;code>GOPATH&lt;/code> for our tools. We even wrote helper tools to create fake
&lt;code>GOPATH&lt;/code> trees and played tricks with symlinks in our vendor directory (which
holds a snapshot of our external dependencies), and we made it all work.&lt;/p>
&lt;p>And for the last 5 years it &lt;em>has&lt;/em> worked pretty well. That is, it worked well
unless you looked too closely at what was happening. Woe be upon you if you
had the misfortune to work on one of the code-generation tools, or the build
system, or the ever-expanding suite of bespoke shell scripts we use to glue
everything together.&lt;/p>
&lt;h2 id="the-problems">The problems&lt;/h2>
&lt;p>Like any large software project, we Kubernetes developers have all learned to
deal with a certain amount of constant low-grade pain. Our custom &lt;code>staging&lt;/code>
mechanism let us bend the rules of Go; it was a little clunky, but when it
worked (which was most of the time) it worked pretty well. When it failed, the
errors were inscrutable and un-Googleable — nobody else was doing the silly
things we were doing. Usually the fix was to re-run one or more of the &lt;code>update-*&lt;/code>
shell scripts in our aptly named &lt;code>hack&lt;/code> directory.&lt;/p>
&lt;p>As time went on we drifted farther and farher from &amp;ldquo;regular&amp;rdquo; Go projects. At
the same time, Kubernetes got more and more popular. For many people,
Kubernetes was their first experience with Go, and it wasn&amp;rsquo;t always a good
experience.&lt;/p>
&lt;p>Our eccentricities also impacted people who consumed some of our code, such as
our client library and the code-generation tools (which turned out to be useful
in the growing ecosystem of custom resources). The tools only worked if you
stored your code in a particular &lt;code>GOPATH&lt;/code>-compatible directory structure, even
though &lt;code>GOPATH&lt;/code> had been replaced by modules more than four years prior.&lt;/p>
&lt;p>This state persisted because of the confluence of three factors:&lt;/p>
&lt;ol>
&lt;li>Most of the time it only hurt a little (punctuated with short moments of
more acute pain).&lt;/li>
&lt;li>Kubernetes was still growing in popularity - we all had other, more urgent
things to work on.&lt;/li>
&lt;li>The fix was not obvious, and whatever we came up with was going to be both
hard and tedious.&lt;/li>
&lt;/ol>
&lt;p>As a Kubernetes maintainer and long-timer, my fingerprints were all over the
build system, the code-generation tools, and the &lt;code>hack&lt;/code> scripts. While the pain
of our mess may have been low &lt;em>on average&lt;/em>, I was one of the people who felt it
regularly.&lt;/p>
&lt;h2 id="enter-workspaces">Enter workspaces&lt;/h2>
&lt;p>Along the way, the Go language team saw what we (and others) were doing and
didn&amp;rsquo;t love it. They designed a new way of stitching multiple modules together
into a new &lt;em>workspace&lt;/em> concept. Once enrolled in a workspace, Go tools had
enough information to work in any directory structure and across modules,
without &lt;code>GOPATH&lt;/code> or symlinks or other dirty tricks.&lt;/p>
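&lt;p>For illustration only (the real Kubernetes workspace enumerates every staging module), a
&lt;code>go.work&lt;/code> file enrolls modules roughly like this:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">go 1.22

use (
	.
	./staging/src/k8s.io/api
	./staging/src/k8s.io/client-go
)
&lt;/code>&lt;/pre>&lt;/div>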
&lt;p>When I first saw this proposal I knew that this was the way out. This was how
to break the logjam. If workspaces was the technical solution, then I would
put in the work to make it happen.&lt;/p>
&lt;h2 id="the-work">The work&lt;/h2>
&lt;p>Adopting workspaces was deceptively easy. I very quickly had the codebase
compiling and running tests with workspaces enabled. I set out to purge the
repository of anything &lt;code>GOPATH&lt;/code> related. That&amp;rsquo;s when I hit the first real bump -
the code-generation tools.&lt;/p>
&lt;p>We had about a dozen tools, totalling several thousand lines of code. All of
them were built using an internal framework called
&lt;a href="https://github.com/kubernetes/gengo">gengo&lt;/a>, which was built on Go&amp;rsquo;s own
parsing libraries. There were two main problems:&lt;/p>
&lt;ol>
&lt;li>Those parsing libraries didn&amp;rsquo;t understand modules or workspaces.&lt;/li>
&lt;li>&lt;code>GOPATH&lt;/code> allowed us to pretend that Go &lt;em>package paths&lt;/em> and directories on
disk were interchangeable in trivial ways. They are not.&lt;/li>
&lt;/ol>
&lt;p>Switching to a
&lt;a href="https://pkg.go.dev/golang.org/x/tools/go/packages">modules- and workspaces-aware parsing&lt;/a>
library was the first step. Then I had to make a long series of changes to
each of the code-generation tools. Critically, I had to find a way to do it
that was possible for some other person to review! I knew that I needed
reviewers who could cover the breadth of changes and reviewers who could go
into great depth on specific topics like gengo and Go&amp;rsquo;s module semantics.
Looking at the history for the areas I was touching, I asked Joe Betz and Alex
Zielenski (SIG API Machinery) to go deep on gengo and code-generation, Jordan
Liggitt (SIG Architecture and all-around wizard) to cover Go modules and
vendoring and the &lt;code>hack&lt;/code> scripts, and Antonio Ojea (wearing his SIG Testing
hat) to make sure the whole thing made sense. We agreed that a series of small
commits would be easiest to review, even if the codebase might not actually
work at each commit.&lt;/p>
&lt;p>Sadly, these were not mechanical changes. I had to dig into each tool to
figure out where they were processing disk paths versus where they were
processing package names, and where those were being conflated. I made
extensive use of the &lt;a href="https://github.com/go-delve/delve">delve&lt;/a> debugger, which
I just can&amp;rsquo;t say enough good things about.&lt;/p>
&lt;p>One unfortunate result of this work was that I had to break compatibility. The
gengo library simply did not have enough information to process packages
outside of GOPATH. After discussion with gengo and Kubernetes maintainers, we
agreed to make &lt;a href="https://github.com/kubernetes/gengo/tree/master/v2">gengo/v2&lt;/a>.
I also used this as an opportunity to clean up some of the gengo APIs and the
tools&amp;rsquo; CLIs to be more understandable and not conflate packages and
directories. For example you can&amp;rsquo;t just string-join directory names and
assume the result is a valid package name.&lt;/p>
&lt;p>Once I had the code-generation tools converted, I shifted attention to the
dozens of scripts in the &lt;code>hack&lt;/code> directory. One by one I had to run them, debug,
and fix failures. Some of them needed minor changes and some needed to be
rewritten.&lt;/p>
&lt;p>Along the way we hit some cases that Go did not support, like workspace
vendoring. Kubernetes depends on vendoring to ensure that our dependencies are
always available, even if their source code is removed from the internet (it
has happened more than once!). After we discussed the problem with the Go team and looked
at possible workarounds, they decided the right path was to
&lt;a href="https://github.com/golang/go/issues/60056">implement workspace vendoring&lt;/a>.&lt;/p>
&lt;p>The eventual Pull Request contained over 200 individual commits.&lt;/p>
&lt;h2 id="results">Results&lt;/h2>
&lt;p>Now that this work has been merged, what does this mean for Kubernetes users?
Pretty much nothing. No features were added or changed. This work was not
about fixing bugs (and hopefully none were introduced).&lt;/p>
&lt;p>This work was mainly for the benefit of the Kubernetes project, to help and
simplify the lives of the core maintainers. In fact, it would not be a lie to
say that it was rather self-serving - my own life is a little bit better now.&lt;/p>
&lt;p>This effort, while unusually large, is just a tiny fraction of the overall
maintenance work that needs to be done. Like any large project, we have lots of
&amp;ldquo;technical debt&amp;rdquo; — tools that made point-in-time assumptions and need
revisiting, internal APIs whose organization doesn&amp;rsquo;t make sense, code which
doesn&amp;rsquo;t follow conventions which didn&amp;rsquo;t exist at the time, and tests which
aren&amp;rsquo;t as rigorous as they could be, just to throw out a few examples. This
work is often called &amp;ldquo;grungy&amp;rdquo; or &amp;ldquo;dirty&amp;rdquo;, but in reality it&amp;rsquo;s just an
indication that the project has grown and evolved. I love this stuff, but
there&amp;rsquo;s far more than I can ever tackle on my own, which makes it an
interesting way for people to get involved. As our unofficial motto goes:
&amp;ldquo;chop wood and carry water&amp;rdquo;.&lt;/p>
&lt;p>Kubernetes used to be a case-study of how &lt;em>not&lt;/em> to do large-scale Go
development, but now our codebase is simpler (and in some cases faster!) and
more consistent. Things that previously seemed like they &lt;em>should&lt;/em> work, but
didn&amp;rsquo;t, now behave as expected.&lt;/p>
&lt;p>Our project is now a little more &amp;ldquo;regular&amp;rdquo;. Not completely so, but we&amp;rsquo;re
getting closer.&lt;/p>
&lt;h2 id="thanks">Thanks&lt;/h2>
&lt;p>This effort would not have been possible without tons of support.&lt;/p>
&lt;p>First, thanks to the Go team for hearing our pain, taking feedback, and solving
the problems for us.&lt;/p>
&lt;p>Special mega-thanks goes to Michael Matloob, on the Go team at Google, who
designed and implemented workspaces. He guided me every step of the way, and
was very generous with his time, answering all my questions, no matter how
dumb.&lt;/p>
&lt;p>Writing code is just half of the work, so another special thanks to my
reviewers: Jordan Liggitt, Joe Betz, Alexander Zielenski, and Antonio Ojea.
These folks brought a wealth of expertise and attention to detail, and made
this work smarter and safer.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Cloud Provider</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</link><pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/03/01/sig-cloud-provider-spotlight-2024/</guid><description>
&lt;p>One of the most popular ways developers use Kubernetes-related services is via cloud providers, but
have you ever wondered how cloud providers can do that? How does this whole process of integration
of Kubernetes to various cloud providers happen? To answer that, let&amp;rsquo;s put the spotlight on &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md">SIG
Cloud Provider&lt;/a>.&lt;/p>
&lt;p>SIG Cloud Provider works to create seamless integrations between Kubernetes and various cloud
providers. Their mission? Keeping the Kubernetes ecosystem fair and open for all. By setting clear
standards and requirements, they ensure every cloud provider plays nicely with Kubernetes. It is
their responsibility to configure cluster components to enable cloud provider integrations.&lt;/p>
&lt;p>In this blog of the SIG Spotlight series, &lt;a href="https://twitter.com/arujjval">Arujjwal Negi&lt;/a> interviews
&lt;a href="https://github.com/elmiko">Michael McCune&lt;/a> (Red Hat), also known as &lt;em>elmiko&lt;/em>, co-chair of SIG Cloud
Provider, to give us an insight into the workings of this group.&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>&lt;strong>Arujjwal&lt;/strong>: Let&amp;rsquo;s start by getting to know you. Can you give us a small intro about yourself and
how you got into Kubernetes?&lt;/p>
&lt;p>&lt;strong>Michael&lt;/strong>: Hi, I’m Michael McCune, most people around the community call me by my handle,
&lt;em>elmiko&lt;/em>. I’ve been a software developer for a long time now (Windows 3.1 was popular when I
started!), and I’ve been involved with open-source software for most of my career. I first got
involved with Kubernetes as a developer of machine learning and data science applications; the team
I was on at the time was creating tutorials and examples to demonstrate the use of technologies like
Apache Spark on Kubernetes. That said, I’ve been interested in distributed systems for many years
and when an opportunity arose to join a team working directly on Kubernetes, I jumped at it!&lt;/p>
&lt;h2 id="functioning-and-working">Functioning and working&lt;/h2>
&lt;p>&lt;strong>Arujjwal&lt;/strong>: Can you give us an insight into what SIG Cloud Provider does and how it functions?&lt;/p>
&lt;p>&lt;strong>Michael&lt;/strong>: SIG Cloud Provider was formed to help ensure that Kubernetes provides a neutral
integration point for all infrastructure providers. Our largest task to date has been the extraction
and migration of in-tree cloud controllers to out-of-tree components. The SIG meets regularly to
discuss progress and upcoming tasks, and also to answer questions and triage bugs that
arise. Additionally, we act as a coordination point for cloud provider subprojects such as the cloud
provider framework, specific cloud controller implementations, and the &lt;a href="https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/">Konnectivity proxy
project&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> After going through the project
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md">README&lt;/a>, I
learned that SIG Cloud Provider works with the integration of Kubernetes with cloud providers. How
does this whole process go?&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> One of the most common ways to run Kubernetes is by deploying it to a cloud environment
(AWS, Azure, GCP, etc). Frequently, the cloud infrastructures have features that enhance the
performance of Kubernetes, for example, by providing elastic load balancing for Service objects. To
ensure that cloud-specific services can be consistently consumed by Kubernetes, the Kubernetes
community has created cloud controllers to address these integration points. Cloud providers can
create their own controllers either by using the framework maintained by the SIG or by following
the API guides defined in the Kubernetes code and documentation. One thing I would like to point out
is that SIG Cloud Provider does not deal with the lifecycle of nodes in a Kubernetes cluster;
for those types of topics, SIG Cluster Lifecycle and the Cluster API project are more appropriate
venues.&lt;/p>
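&lt;p>To make the controller pattern Michael describes more concrete, here is a minimal, self-contained Go sketch of a neutral cloud integration point. It only loosely mirrors the real interfaces in the &lt;code>k8s.io/cloud-provider&lt;/code> module (which carry many more methods, such as Instances, Zones, and Routes); all names below are illustrative, not the actual API.&lt;/p>

```go
package main

import "fmt"

// LoadBalancer is the piece a provider implements so that Kubernetes Service
// objects of type LoadBalancer get a real cloud load balancer behind them.
type LoadBalancer interface {
	EnsureLoadBalancer(serviceName string) (endpoint string, err error)
}

// CloudProvider is the neutral integration point: core components talk only
// to this interface, never to a specific cloud's SDK.
type CloudProvider interface {
	ProviderName() string
	// LoadBalancer returns the implementation and whether it is supported.
	LoadBalancer() (LoadBalancer, bool)
}

// fakeCloud is a toy provider, standing in for an AWS/Azure/GCP integration.
type fakeCloud struct{}

func (fakeCloud) ProviderName() string { return "fakecloud" }

func (f fakeCloud) LoadBalancer() (LoadBalancer, bool) { return f, true }

func (fakeCloud) EnsureLoadBalancer(serviceName string) (string, error) {
	// A real implementation would call the cloud's API here.
	return "lb." + serviceName + ".example.com", nil
}

// reconcileService is what a cloud controller manager loop does, written
// against the neutral interface only.
func reconcileService(cloud CloudProvider, serviceName string) (string, error) {
	lb, ok := cloud.LoadBalancer()
	if !ok {
		return "", fmt.Errorf("provider %q has no load balancer support", cloud.ProviderName())
	}
	return lb.EnsureLoadBalancer(serviceName)
}

func main() {
	endpoint, err := reconcileService(fakeCloud{}, "my-service")
	if err != nil {
		panic(err)
	}
	fmt.Println(endpoint) // lb.my-service.example.com
}
```

&lt;p>The design choice this illustrates is the one the SIG exists to protect: the reconciliation code depends only on the neutral interface, so any provider that implements it plugs in without changes to Kubernetes itself.&lt;/p>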
&lt;h2 id="important-subprojects">Important subprojects&lt;/h2>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> There are a lot of subprojects within this SIG. Can you highlight some of the most
important ones and what job they do?&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> I think the two most important subprojects today are the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#kubernetes-cloud-provider">cloud provider
framework&lt;/a>
and the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#cloud-provider-extraction-migration">extraction/migration
project&lt;/a>. The
cloud provider framework is a common library to help infrastructure integrators build a cloud
controller for their infrastructure. This project is most frequently the starting point for new
people coming to the SIG. The extraction and migration project is the other big subproject and a
large part of why the framework exists. A little history might help explain further: for a long
time, Kubernetes needed some integration with the underlying infrastructure, not
necessarily to add features but to be aware of cloud events like instance termination. The cloud
provider integrations were built into the Kubernetes code tree, and thus the term &amp;ldquo;in-tree&amp;rdquo; was
created (check out this &lt;a href="https://kaslin.rocks/out-of-tree/">article on the topic&lt;/a> for more
info). The activity of maintaining provider-specific code in the main Kubernetes source tree was
considered undesirable by the community. The community’s decision inspired the creation of the
extraction and migration project to remove the &amp;ldquo;in-tree&amp;rdquo; cloud controllers in favor of
&amp;ldquo;out-of-tree&amp;rdquo; components.&lt;/p>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> What makes [the cloud provider framework] a good place to start? Does it consistently have good beginner work? What
kind?&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> I feel that the cloud provider framework is a good place to start as it encodes the
community’s preferred practices for cloud controller managers and, as such, will give a newcomer a
strong understanding of how and what the managers do. Unfortunately, there is not a consistent
stream of beginner work on this component; this is due in part to the mature nature of the framework
and that of the individual providers as well. For folks who are interested in getting more involved,
having some &lt;a href="https://go.dev/">Go language&lt;/a> knowledge is good and also having an understanding of
how at least one cloud API (e.g., AWS, Azure, GCP) works is also beneficial. In my personal opinion,
being a newcomer to SIG Cloud Provider can be challenging as most of the code around this project
deals directly with specific cloud provider interactions. My best advice to people wanting to do
more work on cloud providers is to grow your familiarity with one or two cloud APIs, then look
for open issues on the controller managers for those clouds, and always communicate with the other
contributors as much as possible.&lt;/p>
&lt;h2 id="accomplishments">Accomplishments&lt;/h2>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> Can you share about an accomplishment(s) of the SIG that you are proud of?&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> Since I joined the SIG more than a year ago, we have made great progress in advancing
the extraction and migration subproject. We have moved from an alpha status on the defining
&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/README.md">KEP&lt;/a> to a beta status and
are inching ever closer to removing the old provider code from the Kubernetes source tree. I&amp;rsquo;ve been
really proud to see the active engagement from our community members and to see the progress we have
made towards extraction. I have a feeling that, within the next few releases, we will see the final
removal of the in-tree cloud controllers and the completion of the subproject.&lt;/p>
&lt;h2 id="advice-for-new-contributors">Advice for new contributors&lt;/h2>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> Is there any suggestion or advice for new contributors on how they can start at SIG
Cloud Provider?&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> This is a tricky question in my opinion. SIG Cloud Provider is focused on the code
pieces that integrate between Kubernetes and an underlying infrastructure. It is very common, but
not necessary, for members of the SIG to be representing a cloud provider in an official capacity. I
recommend that anyone interested in this part of Kubernetes should come to a SIG meeting to see how
we operate and also to study the cloud provider framework project. We have some interesting ideas
for future work, such as a common testing framework, that will cut across all cloud providers and
will be a great opportunity for anyone looking to expand their Kubernetes involvement.&lt;/p>
&lt;p>&lt;strong>Arujjwal:&lt;/strong> Are there any specific skills you&amp;rsquo;re looking for that we should highlight? To give you
an example from our own &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md">SIG ContribEx&lt;/a>:
if you&amp;rsquo;re an expert in &lt;a href="https://gohugo.io/">Hugo&lt;/a>, we can always use some help with k8s.dev!&lt;/p>
&lt;p>&lt;strong>Michael:&lt;/strong> The SIG is currently working through the final phases of our extraction and migration
process, but we are looking toward the future and starting to plan what will come next. One of the
big topics that the SIG has discussed is testing. Currently, we do not have a generic common set of
tests that can be exercised by each cloud provider to confirm the behaviour of their controller
manager. If you are an expert in Ginkgo and the Kubetest framework, we could probably use your help
in designing and implementing the new tests.&lt;/p>
&lt;hr>
&lt;p>This is where the conversation ends. I hope this gave you some insights about SIG Cloud Provider&amp;rsquo;s
aims and workings. This is just the tip of the iceberg. To know more and get involved with SIG Cloud
Provider, try attending their meetings
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#meetings">here&lt;/a>.&lt;/p></description></item><item><title>Blog: A look into the Kubernetes Book Club</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/02/22/k8s-book-club/</link><pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/02/22/k8s-book-club/</guid><description>
&lt;p>Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with &lt;a href="https://www.linkedin.com/in/csantanapr/">Carlos Santana
(AWS)&lt;/a> to learn a bit more about how he created the
&lt;a href="https://community.cncf.io/kubernetes-virtual-book-club/">Kubernetes Book Club&lt;/a>, how it works, and
how anyone can join in to take advantage of a community-based learning experience.&lt;/p>
&lt;p>&lt;img alt="Carlos Santana speaking at KubeCon NA 2023" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/k8s-book-club/csantana_k8s_book_club.jpg">&lt;/p>
&lt;p>&lt;strong>Frederico Muñoz (FSM)&lt;/strong>: Hello Carlos, thank you so much for your availability. To start with,
could you tell us a bit about yourself?&lt;/p>
&lt;p>&lt;strong>Carlos Santana (CS)&lt;/strong>: Of course. My experience in deploying Kubernetes in production six
years ago opened the door for me to join &lt;a href="https://knative.dev/">Knative&lt;/a> and then contribute to
Kubernetes through the Release Team. Working on upstream Kubernetes has been one of the best
experiences I&amp;rsquo;ve had in open-source. Over the past two years, in my role as a Senior Specialist
Solutions Architect at AWS, I have been helping large enterprises build their internal developer
platforms (IDPs) on top of Kubernetes. Going forward, my open source contributions are directed
towards &lt;a href="https://cnoe.io/">CNOE&lt;/a> and CNCF projects like &lt;a href="https://github.com/argoproj">Argo&lt;/a>,
&lt;a href="https://www.crossplane.io/">Crossplane&lt;/a>, and &lt;a href="https://www.cncf.io/projects/backstage/">Backstage&lt;/a>.&lt;/p>
&lt;h2 id="creating-the-book-club">Creating the Book Club&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: So your path led you to Kubernetes, and at that point what was the motivating factor for
starting the Book Club?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: The idea for the Kubernetes Book Club sprang from a casual suggestion during a
&lt;a href="https://github.com/vmware-archive/tgik">TGIK&lt;/a> livestream. For me, it was more than just about
reading a book; it was about creating a learning community. This platform has not only been a source
of knowledge but also a support system, especially during the challenging times of the
pandemic. It&amp;rsquo;s gratifying to see how this initiative has helped members cope and grow. The first
book &lt;a href="https://www.oreilly.com/library/view/production-kubernetes/9781492092292/">Production
Kubernetes&lt;/a> took 36
weeks; we started on March 5th, 2021. These days it doesn&amp;rsquo;t take that long to cover a book: we go through one or two
chapters per week.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Could you describe the way the Kubernetes Book Club works? How do you select the books and how
do you go through them?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: We collectively choose books based on the interests and needs of the group. This practical
approach helps members, especially beginners, grasp complex concepts more easily. We have two weekly
series, one for the EMEA timezone, and I organize the US one. Each organizer works with their co-host
and picks a book on Slack, then sets up a lineup of hosts for a couple of weeks to discuss each
chapter.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: If I’m not mistaken, the Kubernetes Book Club is in its 17th book, which is significant: is
there any secret recipe for keeping things active?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: The secret to keeping the club active and engaging lies in a couple of key factors.&lt;/p>
&lt;p>Firstly, consistency has been crucial. We strive to maintain a regular schedule, only cancelling
meetups for major events like holidays or KubeCon. This regularity helps members stay engaged and
builds a reliable community.&lt;/p>
&lt;p>Secondly, making the sessions interesting and interactive has been vital. For instance, I often
introduce pop-up quizzes during the meetups, which not only test members&amp;rsquo; understanding but also
add an element of fun. This approach keeps the content relatable and helps members understand how
theoretical concepts are applied in real-world scenarios.&lt;/p>
&lt;h2 id="topics-covered-in-the-book-club">Topics covered in the Book Club&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: The main topics of the books have been Kubernetes, GitOps, Security, SRE, and
Observability: is this a reflection of the cloud native landscape, especially in terms of
popularity?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: Our journey began with &amp;lsquo;Production Kubernetes&amp;rsquo;, setting the tone for our focus on practical,
production-ready solutions. Since then, we&amp;rsquo;ve delved into various aspects of the CNCF landscape,
aligning each book with a different theme. Each theme, whether it be Security, Observability, or
Service Mesh, is chosen based on its relevance and demand within the community. For instance, in our
recent themes on Kubernetes Certifications, we brought the book authors into our fold as active
hosts, enriching our discussions with their expertise.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: I know that the project had recent changes, namely being integrated into the CNCF as a
&lt;a href="https://community.cncf.io/">Cloud Native Community Group&lt;/a>. Could you talk a bit about this change?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: The CNCF graciously accepted the book club as a Cloud Native Community Group. This is a
significant development that has streamlined our operations and expanded our reach. This alignment
has been instrumental in enhancing our administrative capabilities, similar to those used by
Kubernetes Community Days (KCD) meetups. Now, we have a more robust structure for memberships, event
scheduling, mailing lists, hosting web conferences, and recording sessions.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: How has your involvement with the CNCF impacted the growth and engagement of the Kubernetes
Book Club over the past six months?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: Since becoming part of the CNCF community six months ago, we&amp;rsquo;ve witnessed significant
quantitative changes within the Kubernetes Book Club. Our membership has surged to over 600 members,
and we&amp;rsquo;ve successfully organized and conducted more than 40 events during this period. What&amp;rsquo;s even
more promising is the consistent turnout, with an average of 30 attendees per event. This growth and
engagement are clear indicators of the positive influence of our CNCF affiliation on the Kubernetes
Book Club&amp;rsquo;s reach and impact in the community.&lt;/p>
&lt;h2 id="joining-the-book-club">Joining the Book Club&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: For anyone wanting to join, what should they do?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: There are three steps to join:&lt;/p>
&lt;ul>
&lt;li>First, join the &lt;a href="https://community.cncf.io/kubernetes-virtual-book-club/">Kubernetes Book Club
Community&lt;/a>&lt;/li>
&lt;li>Then RSVP to the
&lt;a href="https://community.cncf.io/kubernetes-virtual-book-club/">events&lt;/a>
on the community page&lt;/li>
&lt;li>Lastly, join the CNCF Slack channel
&lt;a href="https://cloud-native.slack.com/archives/C05EYA14P37">#kubernetes-book-club&lt;/a>.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>FSM&lt;/strong>: Excellent, thank you! Any final comments you would like to share?&lt;/p>
&lt;p>&lt;strong>CS&lt;/strong>: The Kubernetes Book Club is more than just a group of professionals discussing books; it&amp;rsquo;s a
vibrant community, with amazing volunteers who help organize and host it: &lt;a href="https://www.linkedin.com/in/neependra/">Neependra
Khare&lt;/a>, &lt;a href="https://www.linkedin.com/in/ericsmalling/">Eric
Smalling&lt;/a>, &lt;a href="https://www.linkedin.com/in/sevikarakulak/">Sevi
Karakulak&lt;/a>, &lt;a href="https://www.linkedin.com/in/chadmcrowell/">Chad
M. Crowell&lt;/a>, and &lt;a href="https://www.linkedin.com/in/walidshaari/">Walid (CNJ)
Shaari&lt;/a>. Look us up at KubeCon and get your Kubernetes
Book Club sticker!&lt;/p></description></item><item><title>Blog: Spotlight on SIG Release (Release Team Subproject)</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</link><pubDate>Mon, 15 Jan 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/15/sig-release-spotlight-2023/</guid><description>
&lt;p>Welcome to the Release Special Interest Group (SIG Release), where Kubernetes sharpens its blade
with cutting-edge features and bug fixes roughly every four months. Have you ever considered how a
project as big as Kubernetes manages its timeline so efficiently to release a new version, or what
the internal workings of the Release Team look like? If you&amp;rsquo;re curious about these questions or
want to know more and get involved with the work SIG Release does, read on!&lt;/p>
&lt;p>SIG Release plays a crucial role in the development and evolution of Kubernetes.
Its primary responsibility is to manage the release process of new versions of Kubernetes.
It operates on a regular release cycle, &lt;a href="https://www.kubernetes.dev/resources/release/">typically every three to four months&lt;/a>.
During this cycle, the Kubernetes Release Team works closely with other SIGs and contributors
to ensure a smooth and well-coordinated release. This includes planning the release schedule, setting deadlines for code freeze and testing
phases, as well as creating release artefacts like binaries, documentation, and release notes.&lt;/p>
&lt;p>Before you read further, it is important to note that there are two subprojects under SIG
Release - &lt;em>Release Engineering&lt;/em> and &lt;em>Release Team&lt;/em>.&lt;/p>
&lt;p>In this blog post, &lt;a href="https://twitter.com/nitishfy">Nitish Kumar&lt;/a> interviews Verónica
López (PlanetScale), Technical Lead of SIG Release, with the spotlight on the Release Team
subproject, what the release process looks like, and ways to get involved.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>What is the typical release process for a new version of Kubernetes, from initial planning
to the final release? Are there any specific methodologies and tools that you use to ensure a smooth release?&lt;/strong>&lt;/p>
&lt;p>The release process for a new Kubernetes version is a well-structured and community-driven
effort. There are no specific methodologies or
tools as such that we follow, except a calendar with a series of steps to keep things organised.
The complete release process looks like this:&lt;/p>
&lt;/li>
&lt;/ol>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Release Team Onboarding:&lt;/strong> We start with the formation of a Release Team, which includes
volunteers from the Kubernetes community who will be responsible for managing different
components of the new release. This is typically done before the previous release is about to
wrap up. Once the team is formed, new members are onboarded while the Release Team Lead and
the Branch Manager propose a calendar for the usual deliverables. As an example, you can take a look
at &lt;a href="https://github.com/kubernetes/sig-release/issues/2307">the v1.29 team formation issue&lt;/a> created at the SIG Release
repository. For a contributor to be part of the Release Team, they typically go through the
Release Shadow program, but that&amp;rsquo;s not the only way to get involved with SIG Release.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Beginning Phase:&lt;/strong> In the initial weeks of each release cycle, SIG Release diligently
tracks the progress of new features and enhancements outlined in Kubernetes Enhancement
Proposals (KEPs). While not all of these features are entirely new, they often commence
their journey in the alpha phase, subsequently advancing to the beta stage, and ultimately
attaining the status of stability.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Feature Maturation Phase:&lt;/strong> We usually cut a couple of Alpha releases, containing new
features in an experimental state, to gather feedback from the community, followed by a
couple of Beta releases, where features are more stable and the focus is on fixing bugs. Feedback
from users is critical at this stage, to the point where sometimes we need to cut an
additional Beta release to address bugs or other concerns that may arise during this phase. Once
this is cleared, we cut a &lt;em>release candidate&lt;/em> (RC) before the actual release. Throughout
the cycle, efforts are made to update and improve documentation, including release notes
and user guides, a process that, in my opinion, deserves its own post.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Stabilisation Phase:&lt;/strong> A few weeks before the new release, we implement a &lt;em>code freeze&lt;/em>, and
no new features are allowed after this point: this allows the focus to shift towards testing
and stabilisation. In parallel to the main release, we keep cutting monthly patches of old,
officially supported versions of Kubernetes, so you could say that the lifecycle of a Kubernetes
version extends for several months afterwards.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/sig-release-spotlight/sig-release-overview.png"
alt="Release team onboarding; beginning phase; stabilization phase; feature maturation phase"/>
&lt;/figure>
&lt;/li>
&lt;/ul>
&lt;ol start="2">
&lt;li>
&lt;p>&lt;strong>How do you handle the balance between stability and introducing new features in each
release? What criteria are used to determine which features make it into a release?&lt;/strong>&lt;/p>
&lt;p>It’s a never-ending mission; however, we think
that the key is in respecting our process and guidelines. Our guidelines are the result of
hours of discussions and feedback from dozens of members of the community who bring a wealth of knowledge and experience to the project. If we
didn’t have strict guidelines, we would keep having the same discussions over and over again,
instead of using our time for more productive topics that need our attention. All the
critical exceptions require consensus from most of the team members, so we can ensure quality.&lt;/p>
&lt;p>The process of deciding what makes it into a release starts way before the Release Team
takes over the workflows. Each individual SIG along with the most experienced contributors
gets to decide whether they’d like to include a feature or change, so the planning and ultimate
approval usually belongs to them. Then, the Release Team makes sure those contributions meet
the requirements of documentation, testing, backwards compatibility, among others, before
officially allowing them in. A similar process happens with cherry-picks for the monthly patch
releases, where we have strict policies about not accepting PRs that would require a full KEP,
or fixes that don’t include all the affected branches.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>What are some of the most significant challenges you’ve encountered while developing
and releasing Kubernetes? How have you overcome these challenges?&lt;/strong>&lt;/p>
&lt;p>Every release cycle brings its own array of
challenges. It might involve tackling last-minute concerns like newly discovered Common Vulnerabilities and Exposures (CVEs),
resolving bugs within our internal tools, or addressing unexpected regressions caused by
features from previous releases. Another obstacle we often face is that, although our
team is substantial, most of us contribute on a volunteer basis. Sometimes it can feel like
we’re a bit understaffed, however we always manage to get organised and make it work.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>As a new contributor, what should be my ideal path to get involved with SIG Release? In
a community where everyone is busy with their own tasks, how can I find the right set of tasks to contribute effectively to it?&lt;/strong>&lt;/p>
&lt;p>Everyone&amp;rsquo;s way of getting involved within the Open Source community is different. SIG Release
is a self-serving team, meaning that we write our own tools to be able to ship releases. We
collaborate a lot with other SIGs, such as &lt;a href="https://github.com/kubernetes/community/blob/master/sig-k8s-infra/README.md">SIG K8s Infra&lt;/a>, but all the tools that we use need to be
tailor-made for our massive technical needs, while reducing costs. This means that we are
constantly looking for volunteers who’d like to help with different types of projects, beyond “just” cutting a release.&lt;/p>
&lt;p>Our current project requires a mix of skills like &lt;a href="https://go.dev/">Go&lt;/a> programming,
understanding Kubernetes internals, Linux packaging, supply chain security, technical
writing, and general open-source project maintenance. This skill set is always evolving as our project grows.&lt;/p>
&lt;p>For an ideal path, this is what we suggest:&lt;/p>
&lt;ul>
&lt;li>Get yourself familiar with the code, including how features are managed, the release calendar, and the overall structure of the Release Team.&lt;/li>
&lt;li>Join the Kubernetes community communication channels, such as &lt;a href="https://communityinviter.com/apps/kubernetes/community">Slack&lt;/a> (#sig-release), where we are particularly active.&lt;/li>
&lt;li>Join the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-release#meetings">SIG Release weekly meetings&lt;/a>
which are open to all in the community. Participating in these meetings is a great way to learn about ongoing and future projects that
you might find relevant for your skillset and interests.&lt;/li>
&lt;/ul>
&lt;p>Remember, every experienced contributor was once in your shoes, and the community is often more than willing to guide and support newcomers.
Don&amp;rsquo;t hesitate to ask questions, engage in discussions, and take small steps to contribute.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/sig-release-spotlight/sig-release-meetings.png" alt="sig-release-questions">&lt;/img>
&lt;/figure>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>What is the Release Shadow Program and how is it different from other shadow programs included in various other SIGs?&lt;/strong>&lt;/p>
&lt;p>The Release Shadow Program offers a chance for interested individuals to shadow experienced
members of the Release Team throughout a Kubernetes release cycle. This is a unique chance to see all the hard work that a
Kubernetes release requires across sub-teams. A lot of people think that all we do is cut a release every three months, but that’s just the
tip of the iceberg.&lt;/p>
&lt;p>Our program typically aligns with a specific Kubernetes release cycle, which has a
predictable timeline of approximately three months. While this program doesn’t involve writing new Kubernetes features, it still
requires a high sense of responsibility since the Release Team is the last step between a new release and thousands of contributors, so it’s a
great opportunity to learn a lot about modern software development cycles at an accelerated pace.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>What are the qualifications that you generally look for in a person to volunteer as a release shadow/release lead for the next Kubernetes release?&lt;/strong>&lt;/p>
&lt;p>While all the roles require some degree of technical ability, some require more hands-on
experience with Go and familiarity with the Kubernetes API while others require people who
are good at communicating technical content in a clear and concise way. It’s important to mention that we value enthusiasm and commitment over
technical expertise from day 1. If you have the right attitude and show us that you enjoy working with Kubernetes and/or release
engineering, even if it’s only through a personal project that you put together in your spare time, the team will make sure to guide
you. Being a self-starter and not being afraid to ask questions can take you a long way in our team.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>What will you suggest to someone who has got rejected from being a part of the Release Shadow Program several times?&lt;/strong>&lt;/p>
&lt;p>Keep applying.&lt;/p>
&lt;p>With every release cycle we have seen exponential growth in the number of applicants,
so it gets harder to be selected, which can be discouraging, but please know that getting rejected doesn’t mean you’re not talented. It’s
just practically impossible to accept every applicant, however here&amp;rsquo;s an alternative that we suggest:&lt;/p>
&lt;p>&lt;em>Start attending our weekly Kubernetes SIG Release meetings to introduce yourself and get familiar with the team and the projects we are working on.&lt;/em>&lt;/p>
&lt;p>The Release Team is one of the ways to join SIG Release, but we are always looking for more hands to help. Again, in addition to certain
technical ability, the most sought after trait that we look for is people we can trust, and that requires time.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/sig-release-spotlight/sig-release-motivation.png" alt="sig-release-motivation">&lt;/img>
&lt;/figure>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Can you discuss any ongoing initiatives or upcoming features that the release team is particularly excited about for Kubernetes v1.28? How do these advancements align with the long-term vision of Kubernetes?&lt;/strong>&lt;/p>
&lt;p>We are excited about finally publishing Kubernetes packages on community infrastructure. It has been something that we have been wanting to do for a few years now, but it’s a project
with many technical implications that must be in place before doing the transition. Once that’s done, we’ll be able to increase our productivity and take control of the entire workflow.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;h2 id="final-thoughts">Final thoughts&lt;/h2>
&lt;p>Well, this conversation ends here, but not the learning. I hope this interview has given you some idea of what SIG Release does and how to
get started helping out. It is worth repeating that this article covers only the first subproject under SIG Release, the Release Team.
The next SIG Release Spotlight blog will cover the Release Engineering subproject: what it does and how to
get involved. Finally, you can go through the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-release">SIG Release charter&lt;/a> to get a more in-depth understanding of how SIG Release operates.&lt;/p></description></item><item><title>Blog: Blixt - A load-balancer written in Rust, using eBPF, born from Gateway API</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/08/blixt-load-balancer-rust-ebpf-gateway-api/</link><pubDate>Mon, 08 Jan 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/08/blixt-load-balancer-rust-ebpf-gateway-api/</guid><description>
&lt;p>In &lt;a href="https://github.com/kubernetes/community/tree/master/sig-network">SIG Network&lt;/a> we now have a layer 4 (“L4”) load balancer named &lt;a href="https://github.com/kubernetes-sigs/blixt">Blixt&lt;/a>. This
project started as a fun experiment using emerging technologies and is intended
to become a utility for CI and testing to help facilitate the continued
development of &lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/">Gateway API&lt;/a>. Are you interested in developing networking
tools in &lt;a href="https://www.rust-lang.org/">Rust&lt;/a> and &lt;a href="https://www.kernel.org/doc/html/latest/bpf/index.html">eBPF&lt;/a>? Or perhaps you&amp;rsquo;re specifically
interested in Gateway API? We&amp;rsquo;ll tell you a bit about the project and how it
might benefit you.&lt;/p>
&lt;h2 id="history">History&lt;/h2>
&lt;p>&lt;a href="https://github.com/kubernetes-sigs/blixt">Blixt&lt;/a> originated at &lt;a href="https://konghq.com">Kong&lt;/a> as an experiment to test
load-balancing ingress traffic for Kubernetes clusters using eBPF for the
dataplane. Around the time of Kubecon Detroit (2022) we (the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/">Gateway
API&lt;/a> maintainers) realized it had significant potential to help us move
our TCPRoute and UDPRoute support forward, which had been sort of &amp;ldquo;stuck in
alpha&amp;rdquo; at the time due to a lack of conformance tests being developed for them.
At the same time, various others in the SIG Network community developed an
interest in the project due to the rapid growth of eBPF&amp;rsquo;s use in Kubernetes.
Given the potential for benefit to the Kubernetes ecosystem and the growing
interest, Kong decided it would be helpful to &lt;a href="https://github.com/kubernetes/org/issues/3875">donate the project to Kubernetes
SIGs&lt;/a> to benefit upstream Kubernetes.&lt;/p>
&lt;p>Over several months we rewrote the project in &lt;a href="https://www.rust-lang.org/">Rust&lt;/a> (from C), due to a
strong contingent of Rust knowledge (and interest) among those of us developing the
project and an active interest in the burgeoning &lt;a href="https://aya-rs.dev/">Aya project&lt;/a> (a Rust
framework for developing eBPF programs). We did eventually move the
control plane (specifically) to &lt;a href="https://go.dev">Golang&lt;/a> however, so that we could take
advantage of the &lt;a href="https://book.kubebuilder.io/">Kubebuilder&lt;/a> and &lt;a href="https://github.com/kubernetes-sigs/controller-runtime">controller-runtime&lt;/a> ecosystems.
Additionally, we augmented our custom program loader (in eBPF, you generally
write &lt;em>loaders&lt;/em> that load your BPF byte code into the kernel) with
&lt;a href="https://bpfman.io/">bpfman&lt;/a>: a project adjacent to us in the Rust + eBPF ecosystem, which
helps solve several security and ergonomic problems with managing BPF programs on
Linux systems.&lt;/p>
&lt;p>After the recently completed &lt;a href="https://github.com/cncf/foundation/issues/474">license review process&lt;/a>, which provided a blanket
exception for the use of dual-licensed eBPF in CNCF code, the project
officially became part of Kubernetes, and interest has been growing. We have several
goals for the project which revolve around the continued development of Gateway
API, with a specific focus on helping mature Layer 4 support (e.g. the UDPRoute
and TCPRoute API kinds).&lt;/p>
&lt;h2 id="goals">Goals&lt;/h2>
&lt;p>Currently the high level goal of the project is to provide a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/">Gateway
API&lt;/a> driven load-balancer for non-production use cases. Those
non-production use cases include:&lt;/p>
&lt;ul>
&lt;li>Driving conformance tests and adoption for L4 use cases.&lt;/li>
&lt;li>Using this implementation as part of the Gateway API CI testing strategy.&lt;/li>
&lt;li>Having the Blixt control-plane be a reference implementation.&lt;/li>
&lt;li>Exploring issues associated with the generic use of eBPF in Kubernetes.&lt;/li>
&lt;/ul>
&lt;p>In support of those goals we have some more specific sub-goals we&amp;rsquo;re actively
working towards:&lt;/p>
&lt;ul>
&lt;li>Support &lt;a href="https://gateway-api.sigs.k8s.io/api-types/gatewayclass/">GatewayClass&lt;/a> and &lt;a href="https://gateway-api.sigs.k8s.io/api-types/gateway/">Gateway&lt;/a>, meeting &lt;a href="https://gateway-api.sigs.k8s.io/concepts/conformance/">conformance
requirements&lt;/a>&lt;/li>
&lt;li>Support &lt;a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.UDPRoute">UDPRoute&lt;/a> and &lt;a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute">TCPRoute&lt;/a>, meanwhile helping to develop the
conformance requirements for these APIs.&lt;/li>
&lt;/ul>
&lt;p>We have made significant progress on the above: we have a &lt;strong>basic&lt;/strong>
level of support for creating a GatewayClass and at least one Gateway, and
then attaching UDPRoute and TCPRoute resources to that Gateway. Doing so drives
the underlying data plane to receive the corresponding configuration, and
traffic then flows as expected. We emphasize &lt;strong>basic&lt;/strong>, as the
project is still quite early on and being developed in a highly iterative
fashion. That said, the fundamentals are there
and you can try them out yourself on a local system using our
and you can try them out yourself on a local system using our &lt;a href="https://github.com/kubernetes-sigs/blixt#usage">usage
documentation&lt;/a> and &lt;a href="https://github.com/kubernetes-sigs/blixt/tree/main/config/samples">sample configurations&lt;/a>. You can see more
about the project&amp;rsquo;s &lt;a href="https://github.com/kubernetes-sigs/blixt#current-status">current status on the README.md&lt;/a> including the
milestones and current progress.&lt;/p>
&lt;p>One thing that can&amp;rsquo;t be overstated about this project is that it has been at the
center of a lot of learning, community building and fun. We have maintained a
policy with this project that it shall never be intended for production use
cases, which means development of the project is more of a sandbox and a safe
space for people to learn and experiment. If any of this sounds interesting to
you, now is a great time to get involved!&lt;/p>
&lt;h2 id="getting-involved">Getting involved&lt;/h2>
&lt;p>If you&amp;rsquo;re interested in networking, Rust, Linux, eBPF (or all of the above)
there&amp;rsquo;s a lot of opportunity here to learn and have fun. We invite you to jump
right in on the &lt;a href="https://github.com/kubernetes-sigs/blixt">repository&lt;/a> if that&amp;rsquo;s your style, or reach out to us in
the community: You can reach us on &lt;a href="https://kubernetes.slack.com">Kubernetes Slack&lt;/a> on the
&lt;code>#sig-network-gateway-api&lt;/code> channel as well as the &lt;code>#ebpf&lt;/code> channel. Blixt is a
topic of discussion at the &lt;a href="https://gateway-api.sigs.k8s.io/contributing/#meetings">Gateway API community meetings&lt;/a>, and the
monthly &lt;a href="https://github.com/kubernetes/community/tree/master/sig-network#meetings">SIG Network Code Jam&lt;/a> as well.&lt;/p></description></item><item><title>Blog: Kubernetes supports running kube-proxy in an unprivileged container</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/05/kube-proxy-non-privileged/</link><pubDate>Fri, 05 Jan 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2024/01/05/kube-proxy-non-privileged/</guid><description>
&lt;p>This post describes how the &lt;code>--init-only&lt;/code> flag to &lt;code>kube-proxy&lt;/code> can be
used to run the main kube-proxy container in a stricter
&lt;code>securityContext&lt;/code>, by performing the configuration that requires
privileged mode in a separate &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init container&lt;/a>.
Since
Windows doesn&amp;rsquo;t have the equivalent of &lt;code>capabilities&lt;/code>, this only works
on Linux.&lt;/p>
&lt;p>The &lt;code>kube-proxy&lt;/code> Pod still only meets the &lt;em>privileged&lt;/em> &lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/">Pod Security
Standard&lt;/a>,
but it is nevertheless an improvement, because the long-running container
no longer needs to be privileged.&lt;/p>
&lt;p>Please note that &lt;code>kube-proxy&lt;/code> can be installed in different ways. The
examples below assume that kube-proxy is run from a pod, but similar
changes could be made in clusters where it is run as a system service.&lt;/p>
&lt;h2 id="background">Background&lt;/h2>
&lt;p>It is undesirable to run a server container like &lt;code>kube-proxy&lt;/code> in
privileged mode. Security-aware users want to use capabilities instead.&lt;/p>
&lt;p>If &lt;code>kube-proxy&lt;/code> is installed as a Pod, the initialization requires
&amp;ldquo;privileged&amp;rdquo; mode, mostly for setting sysctls. However, &lt;code>kube-proxy&lt;/code>
only tries to set the sysctls if they don&amp;rsquo;t already have the right
values. In theory, then, if a privileged &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">init container&lt;/a>
sets the sysctls to the right values, &lt;code>kube-proxy&lt;/code> can run
unprivileged.&lt;/p>
&lt;p>The problem is knowing &lt;em>what&lt;/em> to set up. Until now, the only option has
been to read the source to see what changes &lt;code>kube-proxy&lt;/code> would have
made, but with &lt;code>--init-only&lt;/code> you can have &lt;code>kube-proxy&lt;/code> itself do the setup
&lt;em>exactly&lt;/em> as on a normal start, and then exit.&lt;/p>
&lt;h2 id="initializing-kube-proxy-in-an-init-container">Initializing kube-proxy in an init container&lt;/h2>
&lt;p>&lt;em>The example manifests below are not complete, but narrowed down to what is
essential to illustrate the function.&lt;/em>&lt;/p>
&lt;p>Usually, cluster operators run &lt;code>kube-proxy&lt;/code> in a privileged security context.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#008000;font-weight:bold">apiVersion&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>apps/v1&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">kind&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>DaemonSet&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">metadata&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">labels&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">k8s-app&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">spec&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">template&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">spec&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">containers&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#008000;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">command&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- /usr/local/bin/kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --config=/var/lib/kube-proxy/config.conf&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --hostname-override=$(NODE_NAME)&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">securityContext&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">privileged&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#a2f;font-weight:bold">true&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#080;font-style:italic"># (lots of stuff omitted here...)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>But now it is possible to use:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#008000;font-weight:bold">apiVersion&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>apps/v1&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">kind&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>DaemonSet&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">metadata&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">labels&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">k8s-app&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb">&lt;/span>&lt;span style="color:#008000;font-weight:bold">spec&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">template&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">spec&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">initContainers&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#008000;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kube-proxy-init&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">command&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- /usr/local/bin/kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --config=/var/lib/kube-proxy/config.conf&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --hostname-override=$(NODE_NAME)&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --init-only&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">securityContext&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">privileged&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#a2f;font-weight:bold">true&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#080;font-style:italic"># (lots of stuff omitted here...)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">containers&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- &lt;span style="color:#008000;font-weight:bold">name&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">command&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- /usr/local/bin/kube-proxy&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --config=/var/lib/kube-proxy/config.conf&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>- --hostname-override=$(NODE_NAME)&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">securityContext&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">capabilities&lt;/span>:&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#008000;font-weight:bold">add&lt;/span>:&lt;span style="color:#bbb"> &lt;/span>[&lt;span style="color:#b44">&amp;#34;NET_ADMIN&amp;#34;&lt;/span>]&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bbb"> &lt;/span>&lt;span style="color:#080;font-style:italic"># (lots of stuff omitted here...)&lt;/span>&lt;span style="color:#bbb">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="summary">Summary&lt;/h2>
&lt;p>The &lt;code>--init-only&lt;/code> flag can be used to perform the privileged
initialization in an init container and run the main container with
only the &lt;code>NET_ADMIN&lt;/code> capability. Installers like &lt;code>kubeadm&lt;/code> will likely be
altered to use this feature in the future.&lt;/p></description></item><item><title>Blog: Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/12/20/contextual-logging/</link><pubDate>Wed, 20 Dec 2023 09:30:00 -0800</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/12/20/contextual-logging/</guid><description>
&lt;p>On behalf of the &lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md">Structured Logging Working Group&lt;/a>
and &lt;a href="https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme">SIG Instrumentation&lt;/a>,
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.&lt;/p>
&lt;h2 id="what-is-contextual-logging">What is contextual logging?&lt;/h2>
&lt;p>&lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging">Contextual logging&lt;/a>
is based on the &lt;a href="https://github.com/go-logr/logr#a-minimal-logging-api-for-go">go-logr&lt;/a> API.
The key idea is that libraries are passed a logger instance by their caller
and use that for logging instead of accessing a global logger.
The binary decides the logging implementation, not the libraries.
The go-logr API is designed around structured logging and supports attaching
additional information to a logger.&lt;/p>
&lt;p>This enables additional use cases:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>The caller can attach additional information to a logger:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName">WithName&lt;/a> adds a &amp;ldquo;logger&amp;rdquo; key with the names concatenated by a dot as value&lt;/li>
&lt;li>&lt;a href="https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues">WithValues&lt;/a> adds key/value pairs&lt;/li>
&lt;/ul>
&lt;p>When passing this extended logger into a function, and the function uses it
instead of the global logger, the additional information is then included
in all log entries, without having to modify the code that generates the log entries.
This is useful in highly parallel applications where it can become hard to identify
all log entries for a certain operation, because the output from different operations gets interleaved.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>When running unit tests, log output can be associated with the current test.
Then, when a test fails, only the log output of the failed test gets shown by go test.
That output can also be more verbose by default because it will not get shown for successful tests.
Tests can be run in parallel without interleaving their output.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>One of the design decisions for contextual logging was to allow attaching a logger as value to a &lt;code>context.Context&lt;/code>.
Since the logger encapsulates all aspects of the intended logging for the call,
it is &lt;em>part&lt;/em> of the context, and not just &lt;em>using&lt;/em> it. A practical advantage is that many APIs
already have a &lt;code>ctx&lt;/code> parameter or can add one. This provides additional advantages, like being able to
get rid of &lt;code>context.TODO()&lt;/code> calls inside functions.&lt;/p>
&lt;h2 id="how-to-use-it">How to use it&lt;/h2>
&lt;p>The contextual logging feature is alpha starting from Kubernetes v1.24,
so it requires the &lt;code>ContextualLogging&lt;/code> &lt;a href="https://deploy-preview-670--kubernetes-contributor.netlify.app/docs/reference/command-line-tools-reference/feature-gates/">feature gate&lt;/a> to be enabled.
If you want to test the feature while it is alpha, you need to enable this feature gate
on the &lt;code>kube-controller-manager&lt;/code> and the &lt;code>kube-scheduler&lt;/code>.&lt;/p>
&lt;p>For the &lt;code>kube-scheduler&lt;/code>, there is one thing to note: in addition to enabling
the &lt;code>ContextualLogging&lt;/code> feature gate, the instrumentation also depends on log verbosity.
To avoid slowing down the scheduler with the logging instrumentation for contextual logging added in 1.29,
it is important to choose carefully when to add additional information:&lt;/p>
&lt;ul>
&lt;li>At &lt;code>-v3&lt;/code> or lower, only &lt;code>WithValues(&amp;quot;pod&amp;quot;)&lt;/code> is used once per scheduling cycle.
This has the intended effect that all log messages for the cycle include the pod information.
Once contextual logging is GA, &amp;ldquo;pod&amp;rdquo; key/value pairs can be removed from all log calls.&lt;/li>
&lt;li>At &lt;code>-v4&lt;/code> or higher, richer log entries get produced where &lt;code>WithValues&lt;/code> is also used for the node (when applicable)
and &lt;code>WithName&lt;/code> is used for the current operation and plugin.&lt;/li>
&lt;/ul>
&lt;p>Here is an example that demonstrates the effect:&lt;/p>
&lt;blockquote>
&lt;p>I1113 08:43:37.029524 87144 default_binder.go:53] &amp;ldquo;Attempting to bind pod to node&amp;rdquo; &lt;strong>logger=&amp;ldquo;Bind.DefaultBinder&amp;rdquo;&lt;/strong> &lt;strong>pod&lt;/strong>=&amp;ldquo;kube-system/coredns-69cbfb9798-ms4pq&amp;rdquo; &lt;strong>node&lt;/strong>=&amp;ldquo;127.0.0.1&amp;rdquo;&lt;/p>
&lt;/blockquote>
&lt;p>The immediate benefit is that the operation and plugin name are visible in &lt;code>logger&lt;/code>.
&lt;code>pod&lt;/code> and &lt;code>node&lt;/code> are already logged as parameters in individual log calls in &lt;code>kube-scheduler&lt;/code> code.
Once contextual logging is supported by more packages outside of &lt;code>kube-scheduler&lt;/code>,
they will also be visible there (for example, client-go). Once it is GA,
log calls can be simplified to avoid repeating those values.&lt;/p>
&lt;p>In &lt;code>kube-controller-manager&lt;/code>, &lt;code>WithName&lt;/code> is used to add the user-visible controller name to log output,
for example:&lt;/p>
&lt;blockquote>
&lt;p>I1113 08:43:29.284360 87141 graph_builder.go:285] &amp;ldquo;garbage controller monitor not synced: no monitors&amp;rdquo; &lt;strong>logger=&amp;ldquo;garbage-collector-controller&amp;rdquo;&lt;/strong>&lt;/p>
&lt;/blockquote>
&lt;p>The &lt;code>logger=&amp;quot;garbage-collector-controller&amp;quot;&lt;/code> was added by the &lt;code>kube-controller-manager&lt;/code> core
when instantiating that controller and appears in all of its log entries - at least as long as the code
that it calls supports contextual logging. Further work is needed to convert shared packages like client-go.&lt;/p>
&lt;h2 id="performance-impact">Performance impact&lt;/h2>
&lt;p>Supporting contextual logging in a package, i.e. accepting a logger from a caller, is cheap.
No performance impact was observed for the &lt;code>kube-scheduler&lt;/code>. As noted above,
adding &lt;code>WithName&lt;/code> and &lt;code>WithValues&lt;/code> needs to be done more carefully.&lt;/p>
&lt;p>In Kubernetes 1.29, enabling contextual logging at production verbosity (&lt;code>-v3&lt;/code> or lower)
caused no measurable slowdown for the &lt;code>kube-scheduler&lt;/code> and is not expected for the &lt;code>kube-controller-manager&lt;/code> either.
At debug levels, a 28% slowdown for some test cases is still reasonable given that the resulting logs make debugging easier.
For details, see the &lt;a href="https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995">discussion around promoting the feature to beta&lt;/a>.&lt;/p>
&lt;h2 id="impact-on-downstream-users">Impact on downstream users&lt;/h2>
&lt;p>Log output is not part of the Kubernetes API and changes regularly in each release,
whether it is because developers work on the code or because of the ongoing conversion
to structured and contextual logging.&lt;/p>
&lt;p>If downstream users have dependencies on specific logs,
they need to be aware of how this change affects them.&lt;/p>
&lt;h2 id="further-reading">Further reading&lt;/h2>
&lt;ul>
&lt;li>Read the &lt;a href="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/05/25/contextual-logging/">Contextual Logging in Kubernetes 1.24&lt;/a> article.&lt;/li>
&lt;li>Read the &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging">KEP-3077: contextual logging&lt;/a>.&lt;/li>
&lt;/ul>
&lt;h2 id="get-involved">Get involved&lt;/h2>
&lt;p>If you&amp;rsquo;re interested in getting involved, we always welcome new contributors to join us.
Contextual logging provides a fantastic opportunity for you to contribute to Kubernetes development and make a meaningful impact.
By joining &lt;a href="https://github.com/kubernetes/community/tree/master/wg-structured-logging">Structured Logging WG&lt;/a>,
you can actively participate in the development of Kubernetes and make your first contribution.
It&amp;rsquo;s a great way to learn and engage with the community while gaining valuable experience.&lt;/p>
&lt;p>We encourage you to explore the repository and familiarize yourself with the ongoing discussions and projects.
It&amp;rsquo;s a collaborative environment where you can exchange ideas, ask questions, and work together with other contributors.&lt;/p>
&lt;p>If you have any questions or need guidance, don&amp;rsquo;t hesitate to reach out to us
and you can do so on our &lt;a href="https://kubernetes.slack.com/messages/wg-structured-logging">public Slack channel&lt;/a>.
If you&amp;rsquo;re not already part of that Slack workspace, you can visit &lt;a href="https://slack.k8s.io/">https://slack.k8s.io/&lt;/a>
for an invitation.&lt;/p>
&lt;p>We would like to express our gratitude to all the contributors who provided excellent reviews,
shared valuable insights, and assisted in the implementation of this feature (in alphabetical order):&lt;/p>
&lt;ul>
&lt;li>Aldo Culquicondor (&lt;a href="https://github.com/alculquicondor">alculquicondor&lt;/a>)&lt;/li>
&lt;li>Andy Goldstein (&lt;a href="https://github.com/ncdc">ncdc&lt;/a>)&lt;/li>
&lt;li>Feruzjon Muyassarov (&lt;a href="https://github.com/fmuyassarov">fmuyassarov&lt;/a>)&lt;/li>
&lt;li>Freddie (&lt;a href="https://github.com/freddie400">freddie400&lt;/a>)&lt;/li>
&lt;li>JUN YANG (&lt;a href="https://github.com/yangjunmyfm192085">yangjunmyfm192085&lt;/a>)&lt;/li>
&lt;li>Kante Yin (&lt;a href="https://github.com/kerthcet">kerthcet&lt;/a>)&lt;/li>
&lt;li>Kiki (&lt;a href="https://github.com/carlory">carlory&lt;/a>)&lt;/li>
&lt;li>Lucas Severo Alves (&lt;a href="https://github.com/knelasevero">knelasevero&lt;/a>)&lt;/li>
&lt;li>Maciej Szulik (&lt;a href="https://github.com/soltysh">soltysh&lt;/a>)&lt;/li>
&lt;li>Mengjiao Liu (&lt;a href="https://github.com/mengjiao-liu">mengjiao-liu&lt;/a>)&lt;/li>
&lt;li>Naman Lakhwani (&lt;a href="https://github.com/Namanl2001">Namanl2001&lt;/a>)&lt;/li>
&lt;li>Oksana Baranova (&lt;a href="https://github.com/oxxenix">oxxenix&lt;/a>)&lt;/li>
&lt;li>Patrick Ohly (&lt;a href="https://github.com/pohly">pohly&lt;/a>)&lt;/li>
&lt;li>songxiao-wang87 (&lt;a href="https://github.com/songxiao-wang87">songxiao-wang87&lt;/a>)&lt;/li>
&lt;li>Tim Allclair (&lt;a href="https://github.com/tallclair">tallclair&lt;/a>)&lt;/li>
&lt;li>Zac (&lt;a href="https://github.com/249043822">249043822&lt;/a>)&lt;/li>
&lt;li>ZhangYu (&lt;a href="https://github.com/Octopusjust">Octopusjust&lt;/a>)&lt;/li>
&lt;li>Ziqi Zhao (&lt;a href="https://github.com/fatsheep9146">fatsheep9146&lt;/a>)&lt;/li>
&lt;/ul></description></item><item><title>Blog: Spotlight on SIG Testing</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/24/sig-testing-spotlight-2023/</link><pubDate>Fri, 24 Nov 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/24/sig-testing-spotlight-2023/</guid><description>
&lt;p>Welcome to another edition of the &lt;em>SIG spotlight&lt;/em> blog series, where we
highlight the incredible work being done by various Special Interest
Groups (SIGs) within the Kubernetes project. In this edition, we turn
our attention to &lt;a href="https://github.com/kubernetes/community/tree/master/sig-testing#readme">SIG Testing&lt;/a>,
a group interested in effective testing of Kubernetes and automating
away project toil. SIG Testing focuses on creating and running tools and
infrastructure that make it easier for the community to write and run
tests, and to contribute, analyze and act upon test results.&lt;/p>
&lt;p>To gain some insights into SIG Testing, &lt;a href="https://github.com/sandipanpanda">Sandipan
Panda&lt;/a> spoke with &lt;a href="https://github.com/michelle192837">Michelle Shepardson&lt;/a>,
a senior software engineer at Google and a chair of SIG Testing, and
&lt;a href="https://github.com/pohly">Patrick Ohly&lt;/a>, a software engineer and architect at
Intel and a SIG Testing Tech Lead.&lt;/p>
&lt;h2 id="meet-the-contributors">Meet the contributors&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Could you tell us a bit about yourself, your role, and
how you got involved in the Kubernetes project and SIG Testing?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> Hi! I&amp;rsquo;m Michelle, a senior software engineer at
Google. I first got involved in Kubernetes through working on tooling
for SIG Testing, like the external instance of TestGrid. I&amp;rsquo;m part of
oncall for TestGrid and Prow, and am now a chair for the SIG.&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> Hello! I work as a software engineer and architect in a
team at Intel which focuses on open source Cloud Native projects. When
I ramped up on Kubernetes to develop a storage driver, my very first
question was &amp;ldquo;how do I test it in a cluster and how do I log
information?&amp;rdquo; That interest led to various enhancement proposals until
I had (re)written enough code that I also took over official roles as
SIG Testing Tech Lead (for the &lt;a href="https://github.com/kubernetes-sigs/e2e-framework">E2E framework&lt;/a>) and
structured logging WG lead.&lt;/p>
&lt;h2 id="testing-practices-and-tools">Testing practices and tools&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Testing is a field in which multiple approaches and
tools exist; how did you arrive at the existing practices?&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> I can’t speak about the early days because I wasn’t
around yet 😆, but looking back at some of the commit history it’s
pretty obvious that developers just took what was available and
started using it. For E2E testing, that was
&lt;a href="https://github.com/onsi/ginkgo">Ginkgo+Gomega&lt;/a>. Some hacks were
necessary, for example around cleanup after a test run and for
categorising tests. Eventually this led to Ginkgo v2 and &lt;a href="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/04/12/e2e-testing-best-practices-reloaded/">revised best
practices for E2E testing&lt;/a>.
Regarding unit testing, opinions are pretty diverse: some maintainers
prefer to use just the Go standard library with hand-written
checks. Others use helper packages like stretchr/testify. That
diversity is okay because unit tests are self-contained - contributors
just have to be flexible when working on many different areas.
Integration testing falls somewhere in the middle. It’s based on Go
unit tests, but needs complex helper packages to bring up an apiserver
and other components, then runs tests that are more like E2E tests.&lt;/p>
&lt;h2 id="subprojects-owned-by-sig-testing">Subprojects owned by SIG Testing&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> SIG Testing is pretty diverse. Can you give a brief
overview of the various subprojects owned by SIG Testing?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> Broadly, we have subprojects related to testing
frameworks and infrastructure, though they definitely overlap. So
for the former, there&amp;rsquo;s
&lt;a href="https://pkg.go.dev/sigs.k8s.io/e2e-framework">e2e-framework&lt;/a> (used
externally),
&lt;a href="https://pkg.go.dev/k8s.io/kubernetes/test/e2e/framework">test/e2e/framework&lt;/a>
(used for Kubernetes itself) and kubetest2 for end-to-end testing,
as well as boskos (resource rental for e2e tests),
&lt;a href="https://kind.sigs.k8s.io/">KIND&lt;/a> (Kubernetes-in-Docker, for local
testing and development), and the cloud provider for KIND. For the
latter, there&amp;rsquo;s &lt;a href="https://docs.prow.k8s.io/">Prow&lt;/a> (K8s-based CI/CD and
chatops), and a litany of other tools and utilities for triage,
analysis, coverage, Prow/TestGrid config generation, and more in the
test-infra repo.&lt;/p>
&lt;p>&lt;em>If you are willing to learn more and get involved with any of the SIG
Testing subprojects, check out the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-testing#subprojects">SIG Testing README&lt;/a>.&lt;/em>&lt;/p>
&lt;h2 id="key-challenges-and-accomplishments">Key challenges and accomplishments&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> What are some of the key challenges you face?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> Kubernetes is a gigantic project in every aspect, from
contributors to code to users and more. Testing and infrastructure
have to meet that scale, keeping up with every change from every repo
under Kubernetes while facilitating the development, improvement, and
release of the project as much as possible, though of course, we&amp;rsquo;re not
the only SIG involved in that. I think another challenge is
staffing subprojects. SIG Testing has a number of subprojects that
have existed for years, but many of the original maintainers for them
have moved on to other areas or no longer have the time to maintain
them. We need to grow long-term expertise and owners in those
subprojects.&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> As Michelle said, the sheer size can be a challenge. It’s
not just the infrastructure; our processes must also scale with the
number of contributors. It’s good to document best practices, but not
good enough: we have many new contributors, which is good, but having
reviewers explain best practices doesn’t scale - assuming that the
reviewers even know about them! It also doesn’t help that existing
code cannot get updated immediately because there is so much of it, in
particular for E2E testing. The initiative to &lt;a href="https://groups.google.com/a/kubernetes.io/g/dev/c/myGiml72IbM/m/QdO5bgQiAQAJ">apply stricter linting to new or modified code&lt;/a>
while accepting that existing code doesn’t pass those same linter
checks helps a bit.&lt;/p>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Any SIG accomplishments that you are proud of and would
like to highlight?&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> I am biased because I have been driving this, but I think
that the &lt;a href="https://github.com/kubernetes-sigs/e2e-framework">E2E framework&lt;/a> and linting are now in a much better shape than
they used to be. We may soon be able to run integration tests with
race detection enabled, which is important because we currently only
have that for unit tests and those tend to be less complex.&lt;/p>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Testing is always important, but is there anything
specific to your work in terms of the Kubernetes release process?&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/flaky-tests.md">test flakes&lt;/a>…
if we have too many of those, development velocity goes down because
PRs cannot be merged without clean test runs and those become less
likely. Developers also lose trust in testing and just &amp;ldquo;retest&amp;rdquo; until
they have a clean run, without checking whether failures might indeed
be related to a regression in their current change.&lt;/p>
&lt;h2 id="the-people-and-the-scope">The people and the scope&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> What are some of your favourite things about this SIG?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> The people, of course 🙂. Aside from that, I like the
broad scope SIG Testing has. I feel like even small changes can make a
big difference for fellow contributors, and even if my interests
change over time, I&amp;rsquo;ll never run out of projects to work on.&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> I can work on things that make my life and the life of my
fellow developers better, like the tooling that we have to use every
day while working on some new feature elsewhere.&lt;/p>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Are there any funny / cool / TIL anecdotes that you
could tell us?&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> I started working on E2E framework enhancements five
years ago, then was less active there for a while. When I came back
and wanted to test some new enhancement, I asked about how to write
unit tests for the new code and was pointed to some existing tests
which looked vaguely familiar, as if I had &lt;em>seen&lt;/em> them before. I
looked at the commit history and found that I had &lt;em>written&lt;/em> them! I’ll
let you decide whether that says something about my failing long-term
memory or simply is normal… Anyway, folks, remember to write good
commit messages and comments; someone will need them at some point -
it might even be yourself!&lt;/p>
&lt;h2 id="looking-ahead">Looking ahead&lt;/h2>
&lt;p>&lt;strong>Sandipan:&lt;/strong> What areas and/or subprojects does your SIG need help with?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> Some subprojects aren&amp;rsquo;t staffed at the moment and could
use folks willing to learn more about
them. &lt;a href="https://github.com/kubernetes-sigs/boskos#boskos">boskos&lt;/a> and
&lt;a href="https://github.com/kubernetes-sigs/kubetest2#kubetest2">kubetest2&lt;/a>
especially stand out to me, since both are important for testing but
lack dedicated owners.&lt;/p>
&lt;p>&lt;strong>Sandipan:&lt;/strong> Are there any useful skills that new contributors to SIG
Testing can bring to the table? What are some things that people can
do to help this SIG if they come from a background that isn’t directly
linked to programming?&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> I think user empathy, writing clear feedback, and
recognizing patterns are really useful skills. Someone who uses the test
framework or tooling and can outline pain points with clear examples,
or who can recognize a wider issue in the project and pull data to
inform solutions for it, can make a real difference.&lt;/p>
&lt;p>&lt;strong>Sandipan:&lt;/strong> What’s next for SIG Testing?&lt;/p>
&lt;p>&lt;strong>Patrick:&lt;/strong> Stricter linting will soon become mandatory for new
code. There are several E2E framework sub-packages that could be
modernised, if someone wants to take on that work. I also see an
opportunity to unify some of our helper code for E2E and integration
testing, but that needs more thought and discussion.&lt;/p>
&lt;p>&lt;strong>Michelle:&lt;/strong> I&amp;rsquo;m looking forward to making some usability
improvements for some of our tools and infra, and to supporting more
long-term contributions and growth of contributors into long-term
roles within the SIG. If you&amp;rsquo;re interested, hit us up!&lt;/p>
&lt;p>Looking ahead, SIG Testing has exciting plans in store. You can get in
touch with the folks at SIG Testing in their &lt;a href="https://kubernetes.slack.com/messages/sig-testing">Slack channel&lt;/a> or attend
one of their regular &lt;a href="https://github.com/kubernetes/community/tree/master/sig-testing#meetings">bi-weekly meetings on Tuesdays&lt;/a>. If
you are interested in making it easier for the community to run tests
and contribute test results, to ensure Kubernetes is stable across a
variety of cluster configurations and cloud providers, join the SIG
Testing community today!&lt;/p></description></item><item><title>Blog: Kubernetes Contributor Summit: Behind-the-scenes</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/</link><pubDate>Fri, 03 Nov 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/</guid><description>
&lt;p>Every year, just before the official start of KubeCon+CloudNativeCon, there&amp;rsquo;s a special event that
has a very special place in the hearts of those organizing and participating in it: the Kubernetes
Contributor Summit. To find out why, and to provide a behind-the-scenes perspective, we interview
Noah Abrahams, whom amongst other roles was the co-lead for the Kubernetes Contributor Summit in
2023.&lt;/p>
&lt;p>&lt;strong>Frederico Muñoz (FSM)&lt;/strong>: Hello Noah, and welcome. Could you start by introducing yourself and
telling us how you got involved in Kubernetes?&lt;/p>
&lt;p>&lt;strong>Noah Abrahams (NA)&lt;/strong>: I’ve been in this space for quite a while.  I got started in IT in the
mid-90s, and I’ve been working in the &amp;ldquo;Cloud&amp;rdquo; space for about 15 years.  It was, frankly, through a
combination of sheer luck (being in the right place at the right time) and having good mentors to
pull me into those places (thanks, Tim!), that I ended up at a startup called Apprenda in 2016.
While I was there, they pivoted into Kubernetes, and it was the best thing that could have happened
to my career.  It was around v1.2 and someone asked me if I could give a presentation on Kubernetes
concepts at &amp;ldquo;my local meetup&amp;rdquo; in Las Vegas.  The meetup didn’t exist yet, so I created it, and got
involved in the wider community.  One thing led to another, and soon I was involved in ContribEx,
joined the release team, was doing booth duty for the CNCF, became an ambassador, and here we are
today.&lt;/p>
&lt;h2 id="the-contributor-summit">The Contributor Summit&lt;/h2>
&lt;p>&lt;img alt="KCSEU 2023 group photo" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/03/k8s-contributor-summit-behind-the-scenes/kcseu2023-group.jpg">&lt;/p>
&lt;p>&lt;strong>FM&lt;/strong>: Before leading the organisation of the KCSEU 2023, how many other Contributor Summits were
you a part of?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: I was involved in four or five before taking the lead. If I&amp;rsquo;m recalling correctly, I
attended the summit in Copenhagen, then sometime in 2018 I joined the wrong meeting, because the
summit staff meeting was listed on the ContribEx calendar. Instead of dropping out of the call, I
listened a bit, then volunteered to take on some work that didn&amp;rsquo;t look like it had anybody yet
dedicated to it. I ended up running Ops in Seattle and helping run the New Contributor Workshop in
Shanghai that year. Since then, I’ve been involved in all but two, since I missed both Barcelona
and Valencia.&lt;/p>
&lt;p>&lt;strong>FM&lt;/strong>: Have you noticed any major changes in terms of how the conference is organized throughout
the years? Namely in terms of number of participants, venues, speakers, themes&amp;hellip;&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: The summit changes over the years with the ebb and flow of the desires of the contributors
that attend. While we can typically expect about the same number of attendees, depending on the
region that the event is held in, we adapt the style and content greatly based on the feedback that
we receive at the end of each event. Some years, contributors ask for more free-style or
unconference-type sessions, and we plan on having more of those, but some years, people ask for more
planned sessions or workshops, so that&amp;rsquo;s what we facilitate. We also have to continually adapt to
the venue that we have, the number of rooms we&amp;rsquo;re allotted, how we&amp;rsquo;re going to share the space with
other events and so forth. That all goes into the planning ahead of time, from how many talk tracks
we’ll have, to what types of tables and how many microphones we want in a room.&lt;/p>
&lt;p>There has been one very significant change over the years, though, and that is that we no longer run
the New Contributor Workshop. While the content was valuable, running the session during the summit
never led to any people who weren’t already contributing to the project becoming dedicated
contributors to the project, so we removed it from the schedule. We&amp;rsquo;ll deliver that content another
way, while we’ll keep the summit focused on existing contributors.&lt;/p>
&lt;h2 id="what-makes-it-special">What makes it special&lt;/h2>
&lt;p>&lt;strong>FM&lt;/strong>: Going back to the introduction I made, I’ve heard several participants saying that KubeCon
is great, but that the Contributor Summit is, for them, the main event. In your opinion, why do you
think that is?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: I think part of it ties into what I mentioned a moment ago, the flexibility in our content
types. For many contributors, I think the summit is basically &amp;ldquo;How KubeCon used to be&amp;rdquo;, back when
it was primarily a gathering of the contributors to talk about the health of the project and the
work that needed to be done. So, in that context, if the contributors want to discuss, say, a new
Working Group, then they have dedicated space to do so in the summit. They also have the space to
sit down and hack on a tough problem, discuss architectural philosophy, bring potential problems to
more people’s attention, refine our methods, and so forth. Plus, the unconference aspect allows for
some malleability on the day-of, for whatever is most important right then and there. Whatever
folks want to get out of this environment is what we’ll provide, and having a space and time
specifically to address your particular needs is always going to be well received.&lt;/p>
&lt;p>Let&amp;rsquo;s not forget the social aspect, too. Despite the fact that we&amp;rsquo;re a global community and work
together remotely and asynchronously, it&amp;rsquo;s still easier to work together when you have a personal
connection, and can put a face to a GitHub handle. Zoom meetings are a good start, but even a
single instance of in-person time makes a big difference in how people work together. So, getting
folks together a couple times a year makes the project run more smoothly.&lt;/p>
&lt;h2 id="organizing-the-summit">Organizing the Summit&lt;/h2>
&lt;p>&lt;strong>FM&lt;/strong>: In terms of the organization team itself, could you share with us a general overview of the
staffing process? Who are the people that make it happen? How many different teams are involved?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: There&amp;rsquo;s a bit of the &amp;ldquo;usual suspects&amp;rdquo; involved in making this happen, many of whom you&amp;rsquo;ll
find in the ContribEx meetings, but really it comes down to whoever is going to step up and do the
work. We start with a general call-out for volunteers from the org. There&amp;rsquo;s a GitHub issue where
we&amp;rsquo;ll track the staffing, and that will get shouted out to all the usual comms channels: Slack,
k-dev, etc.&lt;/p>
&lt;p>From there, there&amp;rsquo;s a handful of different teams, overseeing content/program committee,
registration, communications, day-of operations, the awards the SIGs present to their members, the
after-summit social event, and so on. The leads for each team/role are generally picked from folks
who have stepped up and worked the event before, either as a shadow, or a previous lead, so we know
we can rely on them, which is a recurring theme. The leads pick their shadows from whoever pipes up
on the issue, and the teams move forward, operating according to their role books, which we try to
update at the end of each summit, with what we&amp;rsquo;ve learned over the past few months. It&amp;rsquo;s expected
that a shadow will be in line to lead that role at some point in a future summit, so we always have
a good bench of folks available to make this event happen. A couple of the roles also have some
non-shadow volunteers where people can step in to help a bit, like as an on-site room monitor, and
get a feel for how things are put together without having to give a serious up-front commitment, but
most of the folks working the event are dedicated to both making the summit successful, and coming
back to do so in the future. Of course, the roster can change over time, or even suddenly, as
people gain or lose travel budget, get new jobs, only attend Europe or North America or Asia, etc.
It&amp;rsquo;s a constant dance, relying 100% on the people who want to make this project successful.&lt;/p>
&lt;p>Last, but not least, is the Summit lead. They have to keep the entire process moving forward, be
willing to step in to keep bike-shedding from derailing our deadlines, make sure the right people
are talking to one another, lead all our meetings to make sure everyone gets a voice, etc. In some
cases, the lead has to even be willing to take over an entirely separate role, in case someone gets
sick or has any other extenuating circumstances, to make sure absolutely nothing falls through the
cracks. The lead is only allowed to volunteer after they’ve been through this a few times and know
what the event entails. Event planning is not for the faint of heart.&lt;/p>
&lt;p>&lt;strong>FM&lt;/strong>: The participation of volunteers is essential, but there&amp;rsquo;s also the topic of CNCF support:
how does this dynamic play out in practice?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: This event would not happen in its current form without our CNCF liaison. They provide us
with space, make sure we are fed and caffeinated and cared for, bring us outside spaces to evaluate,
so we have somewhere to hold the social gathering, get us the budget so we have t-shirts and patches
and the like, and generally make it possible for us to put this event together. They&amp;rsquo;re even
responsible for the signage and arrows, so the attendees know where to go. They&amp;rsquo;re the ones sitting
at the front desk, keeping an eye on everything and answering people&amp;rsquo;s questions. At the same time,
they&amp;rsquo;re along to facilitate, and try to avoid influencing our planning.&lt;/p>
&lt;p>There&amp;rsquo;s a ton of work that goes into making the summit happen that is easy to overlook, as an
attendee, because people tend to expect things to just work. It is no exaggeration to say this
event would not have happened as it has over the years without the help of our liaisons, like
Brienne and Deb. They are an integral part of the team.&lt;/p>
&lt;h2 id="a-look-ahead">A look ahead&lt;/h2>
&lt;p>&lt;strong>FM&lt;/strong>: Currently, we’re preparing the NA 2023 summit, how is it going? Any changes in format
compared with previous ones?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: I would say it&amp;rsquo;s going great, though I&amp;rsquo;m sort of emeritus lead for this event, mostly
picking up the things that I see need to be done and don&amp;rsquo;t have someone assigned to them. We&amp;rsquo;re
always learning from our past experiences and making small changes to continually be better, from
how many people need to be on a particular rotation to how far in advance we open and close the CFP.
There&amp;rsquo;s no major changes right now, just continually providing the content that the contributors
want.&lt;/p>
&lt;p>&lt;strong>FM&lt;/strong>: For our readers who might be interested in joining the Kubernetes Contributor Summit, is
there anything they should know?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: First of all, the summit is an event by and for org members. If you&amp;rsquo;re not already an org
member, you should be getting involved before trying to attend the summit, as the content is curated
specifically towards the contributors and maintainers of the project. That applies to the staff, as
well, as all the decisions should be made with the interests and health of Kubernetes contributors
being the end goal. We get a lot of people who show interest in helping out, but then aren&amp;rsquo;t ready
to make any sort of commitment, and that just makes more work for us. If you&amp;rsquo;re not already a
proven and committed member of this community, it’s difficult for us to place you in a position that
requires reliability. We have made some rare exceptions when we need someone local to help us out,
but those are few and far between.&lt;/p>
&lt;p>If you are, however, already a member, we&amp;rsquo;d love to have you. The more people that are involved,
the better the event becomes. That applies to both dedicated staff, and those in attendance
bringing CFPs, unconference topics, and just contributing to the discussions. If you&amp;rsquo;re part of
this community and you&amp;rsquo;re going to be at KubeCon, I would highly urge you to attend, and if you&amp;rsquo;re
not yet an org member, let&amp;rsquo;s make that happen!&lt;/p>
&lt;p>&lt;strong>FM&lt;/strong>: Indeed! Any final comments you would like to share?&lt;/p>
&lt;p>&lt;strong>NA&lt;/strong>: Just that the Contributor Summit is, for me, the ultimate manifestation of the Hallway
Track. By being here, you&amp;rsquo;re part of the conversations that move this project forward. It&amp;rsquo;s good
for you, and it&amp;rsquo;s good for Kubernetes. I hope to see you all in Chicago!&lt;/p></description></item><item><title>Blog: Spotlight on SIG Architecture: Production Readiness</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/</link><pubDate>Thu, 02 Nov 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/11/02/sig-architecture-production-readiness-spotlight-2023/</guid><description>
&lt;p>&lt;em>This is the second interview of a SIG Architecture Spotlight series that will cover the different
subprojects. In this blog, we will cover the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#production-readiness-1">SIG Architecture: Production Readiness
subproject&lt;/a>&lt;/em>.&lt;/p>
&lt;p>In this SIG Architecture spotlight, we talked with &lt;a href="https://github.com/wojtek-t">Wojciech Tyczynski&lt;/a>
(Google), lead of the Production Readiness subproject.&lt;/p>
&lt;h2 id="about-sig-architecture-and-the-production-readiness-subproject">About SIG Architecture and the Production Readiness subproject&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM)&lt;/strong>: Hello Wojciech, could you tell us a bit about yourself, your role and how you
got involved in Kubernetes?&lt;/p>
&lt;p>&lt;strong>Wojciech Tyczynski (WT)&lt;/strong>: I started contributing to Kubernetes in January 2015. At that time,
Google (where I was and still am working) decided to start a Kubernetes team in the Warsaw office
(in addition to already existing teams in California and Seattle). I was lucky enough to be one of
the seeding engineers for that team.&lt;/p>
&lt;p>After two months of onboarding and helping with different tasks across the project towards 1.0
launch, I took ownership of the scalability area and I was leading Kubernetes to support clusters
with 5000 nodes. I’m still involved in &lt;a href="https://github.com/kubernetes/community/blob/master/sig-scalability/README.md">SIG Scalability&lt;/a>
as its Technical Lead. That was the start of a journey, since scalability is such a cross-cutting topic,
and I started contributing to many other areas, including, over time, SIG Architecture.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: In SIG Architecture, why specifically the Production Readiness subproject? Was it something
you had in mind from the start, or was it an unexpected consequence of your initial involvement in
scalability?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: After reaching that milestone of &lt;a href="https://kubernetes.io/blog/2017/03/scalability-updates-in-kubernetes-1-6/">Kubernetes supporting 5000-node clusters&lt;/a>,
one of the goals was to ensure that Kubernetes would not degrade its scalability properties over time. While
non-scalable implementation is always fixable, designing non-scalable APIs or contracts is
problematic. I was looking for a way to ensure that people are thinking about
scalability when they create new features and capabilities without introducing too much overhead.&lt;/p>
&lt;p>This is when I joined forces with &lt;a href="https://github.com/johnbelamaric">John Belamaric&lt;/a> and
&lt;a href="https://github.com/deads2k">David Eads&lt;/a> and created a Production Readiness subproject within SIG
Architecture. While setting the bar for scalability was only one of a few motivations for it, it
ended up fitting quite well. At the same time, I was already involved in the overall reliability of
the system internally, so other goals of Production Readiness were also close to my heart.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: To anyone new to how SIG Architecture works, how would you describe the main goals and
areas of intervention of the Production Readiness subproject?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: The goal of the Production Readiness subproject is to ensure that any feature that is added
to Kubernetes can be reliably used in production clusters. This primarily means that those features
are observable, scalable, and supportable, and can always be safely enabled and, in case of
production issues, safely disabled.&lt;/p>
&lt;h2 id="production-readiness-and-the-kubernetes-project">Production readiness and the Kubernetes project&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Architectural consistency being one of the goals of the SIG, is this made more challenging
by the &lt;a href="https://www.cncf.io/reports/kubernetes-project-journey-report/">distributed and open nature of Kubernetes&lt;/a>?
Do you feel this impacts the approach that Production Readiness has to take?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: The distributed nature of Kubernetes certainly impacts Production Readiness, because it
makes thinking about aspects like enablement/disablement or scalability more challenging. To be more
precise, when enabling or disabling features that span multiple components you need to think about
version skew between them and design for it. For scalability, changes in one component may actually
result in problems for a completely different one, so it requires a good understanding of the whole
system, not just individual components. But it’s also what makes this project so interesting.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Those running Kubernetes in production will have their own perspective on things, how do
you capture this feedback?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: Fortunately, we aren’t talking about &lt;em>&amp;ldquo;them&amp;rdquo;&lt;/em> here, we’re talking about &lt;em>&amp;ldquo;us&amp;rdquo;&lt;/em>: all of us are
working for companies that are managing large fleets of Kubernetes clusters and we’re involved in
that too, so we suffer from those problems ourselves.&lt;/p>
&lt;p>So while we do try to get feedback (our annual PRR survey is very important to us), it rarely
reveals completely new problems - rather, it shows their scale. And we react to it: changes like
&amp;ldquo;Beta APIs off by default&amp;rdquo; happened in reaction to the data we observed.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: On the topic of reaction, that made me think of how the &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/NNNN-kep-template/README.md">Kubernetes Enhancement Proposal (KEP)&lt;/a>
template has a Production Readiness Review (PRR) section, which is tied to the graduation
process. Was this something born out of identified insufficiencies? How would you describe the
results?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: As mentioned above, the overall goal of the Production Readiness subproject is to ensure
that every newly added feature can be reliably used in production. It’s not possible to enforce that
by a central team - we need to make it everyone&amp;rsquo;s problem.&lt;/p>
&lt;p>To achieve it, we wanted to ensure that everyone designing their new feature is thinking about safe
enablement, scalability, observability, supportability, etc. from the very beginning. Which means
not when the implementation starts, but rather during the design. Given that KEPs are effectively
Kubernetes design docs, making it part of the KEP template was the way to achieve the goal.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: So, in a way making sure that feature owners have thought about the implications of their
proposal.&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: Exactly. We have already observed that just by making feature owners think through the PRR
aspects (by requiring them to fill in the PRR questionnaire), many of the original issues go
away. Sure - as PRR approvers we’re still catching gaps - but even the initial versions of KEPs are
better now than they were a couple of years ago when it comes to thinking about
productionisation aspects. That is exactly what we wanted to achieve: spreading the culture of
thinking about reliability in its widest possible meaning.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: We&amp;rsquo;ve been talking about the PRR process, could you describe it for our readers?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: The &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md">PRR process&lt;/a>
is fairly simple - we just want to ensure that you think through the productionisation aspects of
your feature early enough. If you do your job, it’s just a matter of answering some questions in the
KEP template and getting approval from a PRR approver (in addition to regular SIG approval). If you
didn’t think about those aspects earlier, it may require spending more time and potentially revising
some decisions, but that’s exactly what we need to make the Kubernetes project reliable.&lt;/p>
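&lt;p>For context, PRR sign-off is recorded as metadata alongside the KEP in the enhancements repository. A hedged sketch of what such a production-readiness approval file might look like (the KEP number and approver handle below are hypothetical placeholders):&lt;/p>

```yaml
# Hypothetical production-readiness approval metadata for a KEP.
# The KEP number and GitHub handle are placeholders, not a real enhancement.
kep-number: 9999
alpha:
  approver: "@example-prr-approver"
beta:
  approver: "@example-prr-approver"
```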
&lt;h2 id="helping-with-production-readiness">Helping with Production Readiness&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Production Readiness seems to be one area where a good deal of prior exposure is required
in order to be an effective contributor. Are there also ways for someone newer to the project to
contribute?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: PRR approvers have to have a deep understanding of the whole Kubernetes project to catch
potential issues. Kubernetes is such a large project now with so many nuances that people who are
new to the project can simply miss the context, no matter how senior they are.&lt;/p>
&lt;p>That said, there are many ways that you may implicitly help. Increasing the reliability of
particular areas of the project by improving its observability and debuggability, increasing test
coverage, and building new kinds of tests (upgrade, downgrade, chaos, etc.) will help us a lot. Note
that the PRR subproject is focused on keeping the bar at the design level, but we should also care
equally about the implementation. For that, we’re relying on individual SIGs and code approvers, so
having people there who are aware of productionisation aspects, and who deeply care about it, will
help the project a lot.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Thank you! Any final comments you would like to share with our readers?&lt;/p>
&lt;p>&lt;strong>WT&lt;/strong>: I would like to highlight and thank all contributors for their cooperation. While the PRR
adds some additional work for them, we see that people care about it. What’s even more
encouraging is that with every release the quality of the answers improves, and questions like &amp;ldquo;do I
really need a metric reflecting whether my feature works?&amp;rdquo; or &amp;ldquo;is downgrade really that
important?&amp;rdquo; hardly ever come up anymore.&lt;/p>
&lt;p>On September 26, 2023, the first day of
&lt;a href="https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/">KubeCon + CloudNativeCon + Open Source Summit China 2023&lt;/a>,
nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/10/20/kcs-shanghai/kcs04.jpeg"
alt="Kubernetes contributors posing for a group photo"/> &lt;figcaption>
&lt;p>All participants in the 2023 Kubernetes Contributor Summit&lt;/p>
&lt;/figcaption>
&lt;/figure>
&lt;p>This marked the first in-person gathering held in China after three years of the pandemic.&lt;/p>
&lt;h2 id="a-joyful-meetup">A joyful meetup&lt;/h2>
&lt;p>The event began with welcome speeches from &lt;a href="https://github.com/kevin-wangzefeng">Kevin Wang&lt;/a> from Huawei Cloud,
one of the co-chairs of KubeCon, and &lt;a href="https://github.com/puja108">Puja&lt;/a> from Giant Swarm.&lt;/p>
&lt;p>Following the opening remarks, the contributors introduced themselves briefly. Most attendees were from China,
while some contributors had made the journey from Europe and the United States specifically for the conference.
Technical experts from companies such as Microsoft, Intel, Huawei, as well as emerging forces like DaoCloud,
were present. Laughter and cheerful voices filled the room, whether English was spoken with
European or American accents or conversations were carried out in Chinese. This created
an atmosphere of comfort, joy, respect, and anticipation. Past contributions brought everyone closer, and
mutual recognition and accomplishments made this offline gathering possible.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/10/20/kcs-shanghai/kcs06.jpeg"
alt="A group of Kubernetes contributors sat around circular tables"/> &lt;figcaption>
&lt;p>Face to face meeting in Shanghai&lt;/p>
&lt;/figcaption>
&lt;/figure>
&lt;p>The attending contributors were no longer just GitHub IDs; they became vivid faces.
From sitting together and capturing group photos to attempting to identify &amp;ldquo;who is who,&amp;rdquo;
a loosely connected collective emerged - a team that, though free-spirited and loosely knit,
came together to pursue shared dreams.&lt;/p>
&lt;p>As the saying goes, &amp;ldquo;You reap what you sow.&amp;rdquo; Each effort has been diligently documented within
the Kubernetes community contributions. Regardless of the passage of time, the community will
not erase those shining traces. Brilliance can be found in your PRs, issues, or comments.
It can also be seen in the smiling faces captured in meetup photos or heard through stories passed down among contributors.&lt;/p>
&lt;h2 id="technical-sharing-and-discussions">Technical sharing and discussions&lt;/h2>
&lt;p>Next, there were three technical sharing sessions:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md">sig-multi-cluster&lt;/a>:
&lt;a href="https://github.com/RainbowMango">Hongcai Ren&lt;/a>, a maintainer of Karmada, provided an introduction to
the responsibilities and roles of this SIG. Their focus is on designing, discussing, implementing,
and maintaining APIs, tools, and documentation related to multi-cluster management.
Cluster Federation, one of Karmada&amp;rsquo;s core concepts, is also part of their work.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://github.com/helmfile/helmfile">helmfile&lt;/a>: &lt;a href="https://github.com/yxxhero">yxxhero&lt;/a>
from &lt;a href="https://gitlab.cn/">GitLab&lt;/a> presented how to deploy Kubernetes manifests declaratively,
customize configurations, and leverage the latest features of Helm, including Helmfile.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md">sig-scheduling&lt;/a>:
&lt;a href="https://github.com/william-wang">william-wang&lt;/a> from Huawei Cloud shared the recent updates and
future plans of SIG Scheduling. This SIG is responsible for designing, developing, and testing
components related to Pod scheduling.&lt;/p>
&lt;/li>
&lt;/ul>
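&lt;p>As a flavour of the declarative approach covered in the Helmfile session, a minimal &lt;code>helmfile.yaml&lt;/code> pins chart repositories and releases in one place (the repository, chart, and release names below are hypothetical placeholders):&lt;/p>

```yaml
# Minimal helmfile.yaml: declares one chart repository and one release.
# Repository, chart, and release names are illustrative placeholders.
repositories:
  - name: example-charts
    url: https://charts.example.com
releases:
  - name: demo-nginx
    namespace: demo
    chart: example-charts/nginx
    values:
      - values.yaml
```

&lt;p>Running &lt;code>helmfile apply&lt;/code> then reconciles the cluster to this declared state.&lt;/p>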
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/10/20/kcs-shanghai/kcs03.jpeg"
alt="A group of contributors sat at tables, listening to a presenter speaking at a podium"/> &lt;figcaption>
&lt;p>A technical session about SIG Multicluster&lt;/p>
&lt;/figcaption>
&lt;/figure>
&lt;p>Following the sessions, a video featuring a call for contributors by &lt;a href="https://github.com/SergeyKanzhelev">Sergey Kanzhelev&lt;/a>,
the SIG-Node Chair, was played. The purpose was to encourage more contributors to join the Kubernetes community,
with a special emphasis on the popular SIG-Node.&lt;/p>
&lt;p>Lastly, Kevin hosted an Unconference collective discussion session covering topics such as
multi-cluster management, scheduling, elasticity, AI, and more. For detailed minutes of
the Unconference meeting, please refer to &lt;a href="https://docs.qq.com/doc/DY3pLWklzQkhjWHNT">https://docs.qq.com/doc/DY3pLWklzQkhjWHNT&lt;/a>.&lt;/p>
&lt;h2 id="chinas-contributor-statistics">China&amp;rsquo;s contributor statistics&lt;/h2>
&lt;p>The contributor summit took place in Shanghai, with 90% of the attendees being Chinese.
Within the Cloud Native Computing Foundation (CNCF) ecosystem, contributions from China have been steadily increasing. Currently:&lt;/p>
&lt;ul>
&lt;li>Chinese contributors account for 9% of the total.&lt;/li>
&lt;li>Contributions from China make up 11.7% of the overall volume.&lt;/li>
&lt;li>China ranks second globally in terms of contributions.&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>The data is from KubeCon keynotes by Chris Aniszczyk, CTO of Cloud Native Computing Foundation,
on September 26, 2023. This probably understates Chinese contributions: many Chinese contributors
use VPNs and may not be accurately counted as coming from China in the statistics.&lt;/p>
&lt;/blockquote>
&lt;p>The Kubernetes Contributor Summit is an inclusive meetup that welcomes all community contributors, including:&lt;/p>
&lt;ul>
&lt;li>New Contributors&lt;/li>
&lt;li>Current Contributors
&lt;ul>
&lt;li>docs&lt;/li>
&lt;li>code&lt;/li>
&lt;li>community management&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Subproject members&lt;/li>
&lt;li>Members of Special Interest Group (SIG) / Working Group (WG)&lt;/li>
&lt;li>Active Contributors&lt;/li>
&lt;li>Casual Contributors&lt;/li>
&lt;/ul>
&lt;h2 id="acknowledgments">Acknowledgments&lt;/h2>
&lt;p>We would like to express our gratitude to the organizers of this event:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kevin-wangzefeng">Kevin Wang&lt;/a>, the co-chair of KubeCon and the lead of the kubernetes contributor summit.&lt;/li>
&lt;li>&lt;a href="https://github.com/pacoxu">Paco Xu&lt;/a>, who actively coordinated the venue, meals, invited contributors from both China and international sources,
and established WeChat groups to collect agenda topics. They also shared details of the event
before and after its occurrence through &lt;a href="https://github.com/kubernetes/community/issues/7510">pre and post announcements&lt;/a>.&lt;/li>
&lt;li>&lt;a href="https://github.com/mengjiao-liu">Mengjiao Liu&lt;/a>, who was responsible for organizing, coordinating,
and facilitating various matters related to the summit.&lt;/li>
&lt;/ul>
&lt;p>We extend our appreciation to all the contributors who attended the China Kubernetes Contributor Summit in Shanghai.
Your dedication and commitment to the Kubernetes community are invaluable.
Together, we continue to push the boundaries of cloud native technology and shape the future of this ecosystem.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Architecture: Conformance</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</link><pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/10/05/sig-architecture-conformance-spotlight-2023/</guid><description>
&lt;p>&lt;em>This is the first interview of a SIG Architecture Spotlight series
that will cover the different subprojects. We start with the SIG
Architecture: Conformance subproject&lt;/em>&lt;/p>
&lt;p>In this &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md">SIG
Architecture&lt;/a>
spotlight, we talked with &lt;a href="https://github.com/Riaankl">Riaan
Kleinhans&lt;/a> (ii-Team), Lead for the
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1">Conformance
sub-project&lt;/a>.&lt;/p>
&lt;h2 id="about-sig-architecture-and-the-conformance-subproject">About SIG Architecture and the Conformance subproject&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM)&lt;/strong>: Hello Riaan, and welcome! For starters, tell us a
bit about yourself, your role and how you got involved in Kubernetes.&lt;/p>
&lt;p>&lt;strong>Riaan Kleinhans (RK)&lt;/strong>: Hi! My name is Riaan Kleinhans and I live in
South Africa. I am the project manager for the &lt;a href="https://ii.nz">ii-Team&lt;/a> in New
Zealand. When I joined ii, the plan was to move to New Zealand in April
2020 - and then Covid happened. Fortunately, being a flexible and
dynamic team, we were able to make it work remotely and across very
different time zones.&lt;/p>
&lt;p>The ii.nz team have been tasked with managing the Kubernetes
Conformance testing technical debt and writing tests to clear the
technical debt. I stepped into the role of project manager to be the
link between monitoring, test writing and the community. Through that
work I had the privilege of meeting the late &lt;a href="https://github.com/cncf/memorials/blob/main/dan-kohn.md">Dan
Kohn&lt;/a> in
those first months, his enthusiasm about the work we were doing was a
great inspiration.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Thank you - so, your involvement in SIG Architecture started
because of the conformance work?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: SIG Architecture is the home for the Kubernetes Conformance
subproject. Initially, most of my interactions were directly with SIG
Architecture through the Conformance sub-project. However, as we
began organizing the work by SIG, we started engaging directly with
each individual SIG. These engagements with the SIGs that own the
untested APIs have helped us accelerate our work.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: How would you describe the main goals and
areas of intervention of the Conformance sub-project?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: The Kubernetes Conformance sub-project focuses on guaranteeing
compatibility and adherence to the Kubernetes specification by
developing and maintaining a comprehensive conformance test suite. Its
main goals include assuring compatibility across different Kubernetes
implementations, verifying adherence to the API specification,
supporting the ecosystem by encouraging conformance certification, and
fostering collaboration within the Kubernetes community. By providing
standardised tests and promoting consistent behaviour and
functionality, the Conformance subproject ensures a reliable and
compatible Kubernetes ecosystem for developers and users alike.&lt;/p>
&lt;h2 id="more-on-the-conformance-test-suite">More on the Conformance Test Suite&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: A part of providing those standardised tests is, I believe,
the &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md">Conformance Test
Suite&lt;/a>. Could
you explain what it is and its importance?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: The Kubernetes Conformance Test Suite checks if Kubernetes
distributions meet the project&amp;rsquo;s specifications, ensuring
compatibility across different implementations. It covers various
features like APIs, networking, storage, scheduling, and
security. Passing the tests confirms proper implementation and
promotes a consistent and portable container orchestration platform.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Right, the tests are important in the way they define the
minimum features that any Kubernetes cluster must support. Could you
describe the process around determining which features are considered
for inclusion? Is there any tension between a more minimal approach,
and proposals from the other SIGs?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: The requirements for each endpoint that undergoes conformance
testing are clearly defined by SIG Architecture. Only API endpoints
that are generally available and non-optional features are eligible
for conformance. Over the years, there have been several discussions
regarding conformance profiles, exploring the possibility of including
optional endpoints like RBAC, which are widely used by most end users,
in specific profiles. However, this aspect is still a work in
progress.&lt;/p>
&lt;p>Endpoints that do not meet the conformance criteria are listed in
&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml">ineligible_endpoints.yaml&lt;/a>,
which is publicly accessible in the Kubernetes repo. This file can be
updated to add or remove endpoints as their status or requirements
change. These ineligible endpoints are also visible on
&lt;a href="https://apisnoop.cncf.io/">APISnoop&lt;/a>.&lt;/p>
&lt;p>Ensuring transparency and incorporating community input regarding the
eligibility or ineligibility of endpoints is of utmost importance to
SIG Architecture.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Writing tests for new features is something that generally
requires some kind of enforcement. How do you see the evolution of
this in Kubernetes? Was there a specific effort to improve the process
in a way that required tests would be a first-class citizen, or was
that never an issue?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: When discussions surrounding the Kubernetes conformance
programme began in 2018, only approximately 11% of endpoints were
covered by tests. At that time, the CNCF&amp;rsquo;s governing board requested
that if funding were to be provided for the work to cover missing
conformance tests, the Kubernetes Community should adopt a policy of
not allowing new features to be added unless they include conformance
tests for their stable APIs.&lt;/p>
&lt;p>SIG Architecture is responsible for stewarding this requirement, and
&lt;a href="https://apisnoop.cncf.io/">APISnoop&lt;/a> has proven to be an invaluable
tool in this regard. Through automation, APISnoop generates a pull
request every weekend to highlight any discrepancies in Conformance
coverage. If any endpoints are promoted to General Availability
without a conformance test, it will be promptly identified. This
approach helps prevent the accumulation of new technical debt.&lt;/p>
&lt;p>Additionally, there are plans in the near future to create a release
informing job, which will add an additional layer to prevent any new
technical debt.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: I see, tooling and automation play an important role
there. What are, in your opinion, the areas that, conformance-wise,
still require some work to be done? In other words, what are the
current priority areas marked for improvement?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: We have reached the “100% Conformance Tested” milestone in
release 1.27!&lt;/p>
&lt;p>At that point, the community took another look at &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/pending_eligible_endpoints.yaml">all the endpoints
that were listed as ineligible for
conformance&lt;/a>.
The list was populated through community input over several years.
Several endpoints that were previously deemed ineligible for
conformance have been identified and relocated to a new dedicated
list, which is currently receiving focused attention for conformance
test development. Again, that list can also be checked on
&lt;a href="apisnoop.cncf.io.">apisnoop.cncf.io&lt;/a>.&lt;/p>
&lt;p>To ensure the avoidance of new technical debt in the conformance
project, there are upcoming plans to establish a release informing job
as an additional preventive measure.&lt;/p>
&lt;p>While APISnoop is currently hosted on CNCF infrastructure, the project
has been generously donated to the Kubernetes community. Consequently,
it will be transferred to community-owned infrastructure before the
end of 2023.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: That&amp;rsquo;s great news! For anyone wanting to help, what are the
venues for collaboration that you would highlight? Do all of them
require solid knowledge of Kubernetes as a whole, or are there ways
someone newer to the project can contribute?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: Contributing to conformance testing is akin to the task of
&amp;ldquo;washing the dishes&amp;rdquo; – it may not be highly visible, but it remains
incredibly important. It necessitates a strong understanding of
Kubernetes, particularly in the areas where the endpoints need to be
tested. This is why working with each SIG that owns the API endpoint
being tested is so important.&lt;/p>
&lt;p>As part of our commitment to making test writing accessible to
everyone, the ii team is currently engaged in the development of a
&amp;ldquo;click and deploy&amp;rdquo; solution. This solution aims to enable anyone to
swiftly create a working environment on real hardware within
minutes. We will share updates regarding this development as soon as
we are ready.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: That&amp;rsquo;s very helpful, thank you. Any final comments you would
like to share with our readers?&lt;/p>
&lt;p>&lt;strong>RK&lt;/strong>: Conformance testing is a collaborative community endeavour that
involves extensive cooperation among SIGs. SIG Architecture has
spearheaded the initiative and provided guidance. However, the
progress of the work relies heavily on the support of all SIGs in
reviewing, enhancing, and endorsing the tests.&lt;/p>
&lt;p>I would like to extend my sincere appreciation to the ii team for
their unwavering commitment to resolving technical debt over the
years. In particular, &lt;a href="https://github.com/hh">Hippie Hacker&lt;/a>&amp;rsquo;s
guidance and stewardship of the vision has been
invaluable. Additionally, I want to give special recognition to
Stephen Heywood for shouldering the majority of the test writing
workload in recent releases, as well as to Zach Mandeville for his
contributions to APISnoop.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Many thanks for your availability and insightful comments,
I&amp;rsquo;ve personally learned quite a bit from them and I&amp;rsquo;m sure our readers
will as well.&lt;/p>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/tree/master/events/elections/2023">2023 Steering Committee Election&lt;/a> is now complete. The Kubernetes Steering Committee consists of 7 seats, 4 of which were up for election in 2023. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.&lt;/p>
&lt;p>This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee’s role in their &lt;a href="https://github.com/kubernetes/steering/blob/master/charter.md">charter&lt;/a>.&lt;/p>
&lt;p>Thank you to everyone who voted in the election; your participation helps support the community’s continued health and success.&lt;/p>
&lt;h2 id="results">Results&lt;/h2>
&lt;p>Congratulations to the elected committee members whose two year terms begin immediately (listed in alphabetical order by GitHub handle):&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Stephen Augustus (&lt;a href="https://github.com/justaugustus">@justaugustus&lt;/a>), Cisco&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Paco Xu 徐俊杰 (&lt;a href="https://github.com/pacoxu">@pacoxu&lt;/a>), DaoCloud&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Patrick Ohly (&lt;a href="https://github.com/pohly">@pohly&lt;/a>), Intel&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Maciej Szulik (&lt;a href="https://github.com/soltysh">@soltysh&lt;/a>), Red Hat&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>They join continuing members:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Benjamin Elder (&lt;a href="https://github.com/bentheelder">@bentheelder&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Bob Killen (&lt;a href="https://github.com/mrbobbytables">@mrbobbytables&lt;/a>), Google&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Nabarun Pal (&lt;a href="https://github.com/palnabarun">@palnabarun&lt;/a>), VMware&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Stephen Augustus is a returning Steering Committee Member.&lt;/p>
&lt;h2 id="big-thanks">Big Thanks!&lt;/h2>
&lt;p>Thank you and congratulations on a successful election to this round’s election officers:&lt;/p>
&lt;ul>
&lt;li>Bridget Kromhout (&lt;a href="https://github.com/bridgetkromhout">@bridgetkromhout&lt;/a>)&lt;/li>
&lt;li>Davanum Srinivas (&lt;a href="https://github.com/dims">@dims&lt;/a>)&lt;/li>
&lt;li>Kaslin Fields (&lt;a href="https://github.com/kaslin">@kaslin&lt;/a>)&lt;/li>
&lt;/ul>
&lt;p>Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:&lt;/p>
&lt;ul>
&lt;li>Christoph Blecker (&lt;a href="https://github.com/cblecker">@cblecker&lt;/a>)&lt;/li>
&lt;li>Carlos Tadeu Panato Jr. (&lt;a href="https://github.com/cpanato">@cpanato&lt;/a>)&lt;/li>
&lt;li>Tim Pepper (&lt;a href="https://github.com/tpepper">@tpepper&lt;/a>)&lt;/li>
&lt;/ul>
&lt;p>And thank you to all the candidates who came forward to run for election.&lt;/p>
&lt;h2 id="get-involved-with-the-steering-committee">Get Involved with the Steering Committee&lt;/h2>
&lt;p>This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee &lt;a href="https://github.com/orgs/kubernetes/projects/40">backlog items&lt;/a> and weigh in by filing an issue or creating a PR against their &lt;a href="https://github.com/kubernetes/steering">repo&lt;/a>. They hold an open meeting on &lt;a href="https://github.com/kubernetes/steering">the first Monday of every month at 9:30am PT&lt;/a>. They can also be contacted at their public mailing list &lt;a href="mailto:steering@kubernetes.io">steering@kubernetes.io&lt;/a>.&lt;/p>
&lt;p>You can see what the Steering Committee meetings are all about by watching past meetings on the &lt;a href="https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM">YouTube Playlist&lt;/a>.&lt;/p>
&lt;p>If you want to meet some of the newly elected Steering Committee members, join us for the Steering AMA at the &lt;a href="https://k8s.dev/summit">Kubernetes Contributor Summit in Chicago&lt;/a>.&lt;/p>
&lt;hr>
&lt;p>&lt;em>This post was written by the &lt;a href="https://github.com/kubernetes/community/tree/master/communication/contributor-comms">Contributor Comms Subproject&lt;/a>. If you want to write stories about the Kubernetes community, learn more about us.&lt;/em>&lt;/p></description></item><item><title>Blog: Spotlight on SIG ContribEx</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/08/14/sig-contribex-spotlight-2023/</link><pubDate>Mon, 14 Aug 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/08/14/sig-contribex-spotlight-2023/</guid><description>
&lt;p>&lt;strong>Author&lt;/strong>: Fyka Ansari&lt;/p>
&lt;p>Welcome to the world of Kubernetes and its vibrant contributor
community! In this blog post, we&amp;rsquo;ll be shining a spotlight on the
&lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md">Special Interest Group for Contributor
Experience&lt;/a>
(SIG ContribEx), an essential component of the Kubernetes project.&lt;/p>
&lt;p>SIG ContribEx in Kubernetes is responsible for developing and
maintaining a healthy and productive community of contributors to the
project. This involves identifying and addressing bottlenecks that may
hinder the project&amp;rsquo;s growth and feature velocity, such as pull request
latency and the number of open pull requests and issues.&lt;/p>
&lt;p>SIG ContribEx works to improve the overall contributor experience by
creating and maintaining guidelines, tools, and processes that
facilitate collaboration and communication among contributors. They
also focus on community building and support, including outreach
programs and mentorship initiatives to onboard and retain new
contributors.&lt;/p>
&lt;p>Ultimately, the role of SIG ContribEx is to foster a welcoming and
inclusive environment that encourages contribution and supports the
long-term sustainability of the Kubernetes project.&lt;/p>
&lt;p>In this blog post, &lt;a href="https://twitter.com/1fyka">Fyka Ansari&lt;/a> interviews
&lt;a href="https://twitter.com/kaslinfields">Kaslin Fields&lt;/a>, a DevRel Engineer
at Google, who is a chair of SIG ContribEx, and &lt;a href="https://twitter.com/MadhavJivrajani">Madhav
Jivrajani&lt;/a>, a Software Engineer
at VMware, who serves as a SIG ContribEx Tech Lead. This interview
covers various aspects of SIG ContribEx, including current
initiatives, exciting developments, and how interested individuals can
get involved and contribute to the group. It provides valuable
insights into the workings of SIG ContribEx and highlights the
importance of its role in the Kubernetes ecosystem.&lt;/p>
&lt;h3 id="introductions">Introductions&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> Let&amp;rsquo;s start by diving into your background and how you got
involved in the Kubernetes ecosystem. Can you tell us more about that
journey?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> I first got involved in the Kubernetes ecosystem through
my mentor, Jonathan Rippy, who introduced me to containers during my
early days in tech. Eventually, I transitioned to a team working with
containers, which sparked my interest in Kubernetes when it was
announced. While researching Kubernetes in that role, I eagerly sought
opportunities to engage with the containers/Kubernetes community. It
was not until my subsequent job that I found a suitable role to
contribute consistently. I joined SIG ContribEx, specifically in the
Contributor Comms subproject, to both deepen my knowledge of
Kubernetes and support the community better.&lt;/p>
&lt;p>&lt;strong>Madhav:&lt;/strong> My journey with Kubernetes began when I was a student,
searching for interesting and exciting projects to work on. With my
peers, I discovered open source and attended The New Contributor
Workshop organized by the Kubernetes community. The workshop not only
provided valuable insights into the community structure but also gave
me a sense of warmth and welcome, which motivated me to join and
remain involved. I realized that collaboration is at the heart of
open-source communities, and to get answers and support, I needed to
contribute and do my part. I started working on issues in ContribEx,
particularly focusing on GitHub automation, despite not fully
understanding the task at first. I continued contributing to various
technical and non-technical aspects of the project, finding it to be
one of the most professionally rewarding experiences in my life.&lt;/p>
&lt;p>&lt;strong>Fyka:&lt;/strong> That&amp;rsquo;s such an inspiration in itself! I&amp;rsquo;m sure beginners
reading this have found the motivation to take their first
steps. Embracing the learning journey, seeking mentorship, and
engaging with the Kubernetes community can pave the way for exciting
opportunities in the tech industry. Your stories prove the importance
of starting small and being proactive; as Madhav said, don&amp;rsquo;t be
afraid to take on tasks, even if you&amp;rsquo;re uncertain at first.&lt;/p>
&lt;h3 id="primary-goals-and-scope">Primary goals and scope&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> Given your experience as a member of SIG ContribEx, could
you tell us a bit about the group&amp;rsquo;s primary goals and initiatives? Its
current focus areas? What do you see as the scope of SIG ContribEx and
the impact it has on the Kubernetes community?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> SIG ContribEx&amp;rsquo;s primary goals are to simplify the
contributions of Kubernetes contributors and foster a welcoming
community. It collaborates with other Kubernetes SIGs, such as
planning the Contributor Summit at KubeCon, ensuring it meets the
needs of various groups. The group&amp;rsquo;s impact is evident in projects
like updating org membership policies and managing critical platforms
like Zoom, YouTube, and Slack. Its scope encompasses making the
contributor experience smoother and supporting the overall Kubernetes
community.&lt;/p>
&lt;p>&lt;strong>Madhav:&lt;/strong> The Kubernetes project has both vertical and cross-cutting
SIGs; ContribEx is a deeply cross-cutting SIG, impacting virtually
every area of the Kubernetes community. Adding to Kaslin&amp;rsquo;s point,
sustainability of the Kubernetes project and community is more critical
now than ever, and SIG ContribEx plays a central role in addressing
issues such as maintainer succession by facilitating cohorts in which
SIGs train experienced community members to take on leadership
roles. Excellent examples include SIG CLI and SIG Apps, where cohorts
led to the onboarding of new reviewers. Additionally, SIG ContribEx is essential
in managing GitHub automation tools, including bots and commands used
by contributors for interacting with &lt;a href="https://docs.prow.k8s.io/">Prow&lt;/a>
and other automation (label syncing, group and GitHub team management,
etc).&lt;/p>
&lt;h3 id="beginners-guide">Beginner&amp;rsquo;s guide!&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> I&amp;rsquo;ll never forget talking to Kaslin when I joined the
community and needed help with contributing. Kaslin, your quick and
clear answers were a huge help in getting me started. Can you both
give some tips for people new to contributing to Kubernetes? What
makes SIG ContribEx a great starting point? Why should beginners and
current contributors consider it? And what cool opportunities are
there for newbies to jump in?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> If you want to contribute to Kubernetes for the first
time, it can be overwhelming to know where to start. A good option is
to join SIG ContribEx, as it offers great opportunities to get to know
and serve the community. Within SIG ContribEx, various subprojects allow
you to explore different parts of the Kubernetes project while you
learn how contributions work. Once you know a bit more, it’s common
for you to move to other SIGs within the project, and we think that’s
wonderful. While many newcomers look for &amp;ldquo;good first issues&amp;rdquo; to start
with, these opportunities can be scarce and get claimed
quickly. Instead, the real benefit lies in attending meetings and
getting to know the community. As you learn more about the project and
the people involved, you&amp;rsquo;ll be better equipped to offer your help, and
the community will be more inclined to seek your assistance when
needed. As a co-lead for the Contributor Comms subproject, I can
confidently say that it&amp;rsquo;s an excellent place for beginners to get
involved. We have supportive leads and particularly beginner-friendly
projects too.&lt;/p>
&lt;p>&lt;strong>Madhav:&lt;/strong> To begin, read the &lt;a href="https://github.com/kubernetes/community/tree/master#readme">SIG
README&lt;/a> on
GitHub, which provides an overview of the projects the SIG
manages. While attending meetings is beneficial for all SIGs, it&amp;rsquo;s
especially recommended for SIG ContribEx, as each subproject gets
dedicated slots for updates and areas that need help. If you can&amp;rsquo;t
attend in real-time due to time zone differences, you can catch the
meeting recordings or
&lt;a href="https://docs.google.com/document/d/1K3vjCZ9C3LwYrOJOhztQtFuDQCe-urv-ewx1bI8IPVQ/edit?usp=sharing">Notes&lt;/a>
later.&lt;/p>
&lt;h3 id="skills-you-learn">Skills you learn!&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> What skills do you look for when bringing in new
contributors to SIG ContribEx, from passion to expertise?
Additionally, what skills can contributors expect to develop while
working with SIG ContribEx?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> The skills folks need to have, or will acquire, vary
depending on which area of ContribEx they work on. Even within a subproject, a
range of skills can be useful and/or developed. For example, the tech
lead role involves technical tasks and overseeing automation, while
the social media lead role requires excellent communication
skills. Working with SIG ContribEx allows contributors to acquire
various skills based on their chosen subproject. By participating in
meetings, listening, learning, and taking on tasks related to their
interests, they can develop and hone these skills. Some subprojects
may require more specialized skills, like program management for the
mentoring project, but all contributors can benefit from offering
their talents to help teach others and contribute to the community.&lt;/p>
&lt;h3 id="sub-projects-under-sig-contribex">Sub-projects under SIG ContribEx&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> SIG ContribEx has several smaller projects. Can you tell me
about the aims of these projects and how they&amp;rsquo;ve impacted the
Kubernetes community?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> Some SIGs have one or two subprojects and some have none
at all, but in SIG ContribEx, we have &lt;strong>eleven&lt;/strong>!&lt;/p>
&lt;p>Here’s a list of them and their respective mission statements:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Community&lt;/strong>: Manages the community repository, documentation,
and operations.&lt;/li>
&lt;li>&lt;strong>Community management&lt;/strong>: Handles communication platforms and
policies for the community.&lt;/li>
&lt;li>&lt;strong>Contributor-comms&lt;/strong>: Focuses on promoting the success of
Kubernetes contributors through marketing.&lt;/li>
&lt;li>&lt;strong>Contributors-documentation&lt;/strong>: Writes and maintains documentation
for contributing to Kubernetes.&lt;/li>
&lt;li>&lt;strong>Devstats&lt;/strong>: Maintains and updates the &lt;a href="https://k8s.devstats.cncf.io">Kubernetes
statistics&lt;/a> website.&lt;/li>
&lt;li>&lt;strong>Elections&lt;/strong>: Oversees community elections and maintains related
documentation and software.&lt;/li>
&lt;li>&lt;strong>Events&lt;/strong>: Organizes contributor-focused events like the
Contributor Summit.&lt;/li>
&lt;li>&lt;strong>GitHub management&lt;/strong>: Manages permissions, repositories, and
groups on GitHub.&lt;/li>
&lt;li>&lt;strong>Mentoring&lt;/strong>: Develops programs to help contributors progress in
their contributions.&lt;/li>
&lt;li>&lt;strong>Sigs-GitHub-actions&lt;/strong>: Maintains a repository of GitHub Actions
related to all SIGs in Kubernetes.&lt;/li>
&lt;li>&lt;strong>Slack-infra&lt;/strong>: Creates and maintains tools and automation for
Kubernetes Slack.&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Madhav:&lt;/strong> Also, Devstats is critical from a sustainability
standpoint!&lt;/p>
&lt;p>&lt;em>(If you would like to learn more and get involved with any of these
sub-projects, check out the&lt;/em> &lt;a href="https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md#subprojects">SIG ContribEx
README&lt;/a>).&lt;/p>
&lt;h3 id="accomplishments">Accomplishments&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> With that said, any SIG-related accomplishment that you’re
proud of?&lt;/p>
&lt;p>&lt;strong>Kaslin:&lt;/strong> I&amp;rsquo;m proud of the accomplishments made by SIG ContribEx and
its contributors in supporting the community. Some of the recent
achievements include:&lt;/p>
&lt;ol>
&lt;li>&lt;em>Establishment of the elections subproject&lt;/em>: Kubernetes is a massive
project, and ensuring smooth leadership transitions is
crucial. The contributors in this subproject organize fair and
consistent elections, which helps keep the project running
effectively.&lt;/li>
&lt;li>&lt;em>New issue triage process&lt;/em>: In a large open-source project
like Kubernetes, there&amp;rsquo;s always a lot of work to be done. To
ensure things progress safely, we implemented new labels and
updated functionality for issue triage using our Prow tool. This
reduces bottlenecks in the workflow and allows leaders to
accomplish more.&lt;/li>
&lt;li>&lt;em>New org membership requirements&lt;/em>: Becoming an org member in
Kubernetes can be overwhelming for newcomers. We view org
membership as a significant milestone for contributors aiming to
take on leadership roles. We recently updated the rules to
automatically remove privileges from inactive members, making sure
that the right people have access to the necessary tools and
responsibilities.&lt;/li>
&lt;/ol>
&lt;p>Overall, these accomplishments have greatly benefited our fellow
contributors and strengthened the Kubernetes community.&lt;/p>
&lt;h3 id="upcoming-initiatives">Upcoming initiatives&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> Could you give us a sneak peek into what&amp;rsquo;s next for the
group? We&amp;rsquo;re excited to hear about upcoming projects and initiatives
from this dynamic team.&lt;/p>
&lt;p>&lt;strong>Madhav:&lt;/strong> We’d love for more groups to sign up for mentoring
cohorts! We’re probably going to have to spend some time polishing the
process around that.&lt;/p>
&lt;h3 id="final-thoughts">Final thoughts&lt;/h3>
&lt;p>&lt;strong>Fyka:&lt;/strong> As we wrap up our conversation, would you like to share some
final thoughts for those interested in contributing to SIG ContribEx
or getting involved with Kubernetes?&lt;/p>
&lt;p>&lt;strong>Madhav&lt;/strong>: Kubernetes is meant to be overwhelming and difficult
initially! You’re coming into something that’s taken multiple people,
from multiple countries, multiple years to build. Embrace that
diversity! Use the high entropy initially to collide around and gain
as much knowledge about the project and community as possible before
you decide to settle in your niche.&lt;/p>
&lt;p>&lt;strong>Fyka:&lt;/strong> Thank You Madhav and Kaslin, it was an absolute pleasure
chatting about SIG ContribEx and your experiences as a member. It&amp;rsquo;s
clear that the role of SIG ContribEx in Kubernetes is significant and
essential, ensuring scalability, growth and productivity, and I hope
this interview inspires more people to get involved and contribute to
Kubernetes. I wish SIG ContribEx all the best, and can&amp;rsquo;t wait to see
what exciting things lie ahead!&lt;/p>
&lt;h2 id="what-next">What next?&lt;/h2>
&lt;p>We love meeting new contributors and helping them investigate
different Kubernetes project spaces. If you are interested in getting
more involved with SIG ContribEx, here are some resources for you to
get started:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes/community/tree/master/sig-contributor-experience#contributor-experience-special-interest-group">GitHub&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://groups.google.com/g/kubernetes-sig-contribex">Mailing list&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/community/labels/sig%2Fcontributor-experience">Open Community
Issues/PRs&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://slack.k8s.io/">Slack&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://kubernetes.slack.com/messages/sig-contribex">Slack channel
#sig-contribex&lt;/a>&lt;/li>
&lt;li>SIG ContribEx also hosted a &lt;a href="https://youtu.be/5Bs1bs6iFmY">KubeCon
talk&lt;/a> about studying Kubernetes
Contributor experiences.&lt;/li>
&lt;/ul></description></item><item><title>Blog: Spotlight on SIG CLI</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/07/20/sig-cli-spotlight-2023/</link><pubDate>Thu, 20 Jul 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/07/20/sig-cli-spotlight-2023/</guid><description>
&lt;p>In the world of Kubernetes, managing containerized applications at
scale requires powerful and efficient tools. The command-line
interface (CLI) is an integral part of any developer or operator’s
toolkit, offering a convenient and flexible way to interact with a
Kubernetes cluster.&lt;/p>
&lt;p>SIG CLI plays a crucial role in improving the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli">Kubernetes
CLI&lt;/a>
experience by focusing on the development and enhancement of
&lt;code>kubectl&lt;/code>, the primary command-line tool for Kubernetes.&lt;/p>
&lt;p>In this SIG CLI Spotlight, Arpit Agrawal, SIG ContribEx-Comms team
member, talked with &lt;a href="https://github.com/KnVerey">Katrina Verey&lt;/a>, Tech
Lead &amp;amp; Chair of SIG CLI, and &lt;a href="https://github.com/soltysh">Maciej
Szulik&lt;/a>, SIG CLI Batch Lead, about SIG
CLI, current projects, challenges and how anyone can get involved.&lt;/p>
&lt;p>So, whether you are a seasoned Kubernetes enthusiast or just getting
started, understanding the significance of SIG CLI will undoubtedly
enhance your Kubernetes journey.&lt;/p>
&lt;h2 id="introductions">Introductions&lt;/h2>
&lt;p>&lt;strong>Arpit&lt;/strong>: Could you tell us a bit about yourself, your role, and how
you got involved in SIG CLI?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: I’m one of the technical leads for SIG CLI. I’ve been
working on Kubernetes in multiple areas since 2014, and in 2018 I was
appointed a lead.&lt;/p>
&lt;p>&lt;strong>Katrina&lt;/strong>: I’ve been working with Kubernetes as an end-user since
2016, but it was only in late 2019 that I discovered how well SIG CLI
aligned with my experience from internal projects. I started regularly
attending meetings and made a few small PRs, and by 2021 I was working
more deeply with the
&lt;a href="https://github.com/kubernetes-sigs/kustomize">Kustomize&lt;/a> team
specifically. Later that year, I was appointed to my current roles as
subproject owner for Kustomize and KRM Functions, and as SIG CLI Tech
Lead and Chair.&lt;/p>
&lt;h2 id="about-sig-cli">About SIG CLI&lt;/h2>
&lt;p>&lt;strong>Arpit&lt;/strong>: Thank you! Could you share with us the purpose and goals of SIG CLI?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Our
&lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli/">charter&lt;/a>
has the most detailed description, but in a few words, we handle all CLI
tooling that helps you manage your Kubernetes manifests and interact
with your Kubernetes clusters.&lt;/p>
&lt;p>&lt;strong>Arpit&lt;/strong>: I see. And how does SIG CLI work to promote best-practices
for CLI development and usage in the cloud native ecosystem?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: Within &lt;code>kubectl&lt;/code>, we have several ongoing efforts that
try to encourage new contributors to align existing commands to new
standards. We publish several libraries which hopefully make it easier
to write CLIs that interact with Kubernetes APIs, such as cli-runtime
and
&lt;a href="https://github.com/kubernetes-sigs/kustomize/tree/master/kyaml">kyaml&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Katrina&lt;/strong>: We also maintain some interoperability specifications for
CLI tooling, such as the &lt;a href="https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md">KRM Functions
Specification&lt;/a>
(GA) and the new ApplySet
Specification
(alpha).&lt;/p>
&lt;h2 id="current-projects-and-challenges">Current projects and challenges&lt;/h2>
&lt;p>&lt;strong>Arpit&lt;/strong>: Going through the README file, it’s clear SIG CLI has a
number of subprojects, could you highlight some important ones?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: The four most active subprojects, which in my opinion
are most worthy of your time investment, are:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes/kubectl">&lt;code>kubectl&lt;/code>&lt;/a>: the canonical Kubernetes CLI.&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes-sigs/kustomize">Kustomize&lt;/a>: a
template-free customization tool for Kubernetes yaml manifest files.&lt;/li>
&lt;li>&lt;a href="https://kui.tools">KUI&lt;/a> - a GUI interface to Kubernetes, think
&lt;code>kubectl&lt;/code> on steroids.&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes-sigs/krew">&lt;code>krew&lt;/code>&lt;/a>: a plugin manager for &lt;code>kubectl&lt;/code>.&lt;/li>
&lt;/ul>
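&lt;p>To give a flavour of what Kustomize&amp;rsquo;s &amp;ldquo;template-free customization&amp;rdquo; means in practice, here is a minimal sketch of a &lt;code>kustomization.yaml&lt;/code> (the resource file names and labels are hypothetical):&lt;/p>
&lt;pre>&lt;code># kustomization.yaml (illustrative sketch; file names are hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:           # plain manifests, no templating placeholders
- deployment.yaml
- service.yaml
namePrefix: staging- # overlay-style tweaks applied on top of the manifests
commonLabels:
  app: my-app
&lt;/code>&lt;/pre>
&lt;p>Running &lt;code>kubectl apply -k .&lt;/code> (or &lt;code>kustomize build .&lt;/code>) in that directory renders and applies the customized manifests.&lt;/p>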
&lt;p>&lt;strong>Arpit&lt;/strong>: Are there any upcoming initiatives or developments that SIG
CLI is working on?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: There are always several initiatives we’re working on at
any given point in time. It’s best to join &lt;a href="https://github.com/kubernetes/community/tree/master/sig-cli/#meetings">one of our
calls&lt;/a>
to learn about the current ones.&lt;/p>
&lt;p>&lt;strong>Katrina&lt;/strong>: For major features, you can check out &lt;a href="https://www.kubernetes.dev/resources/keps/">our open
KEPs&lt;/a>. For instance, in
1.27 we introduced alphas for &lt;a href="https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning/">a new pruning mode in kubectl
apply&lt;/a>,
and for kubectl create plugins. Exciting ideas that are currently
under discussion include an interactive mode for &lt;code>kubectl&lt;/code> delete
(&lt;a href="https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning">KEP
3895&lt;/a>)
and the &lt;code>kuberc&lt;/code> user preferences file (&lt;a href="https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning">KEP
3104&lt;/a>).&lt;/p>
&lt;p>&lt;strong>Arpit&lt;/strong>: Could you discuss any challenges that SIG CLI faces in its
efforts to improve CLIs for cloud-native technologies? What are the
future efforts to solve them?&lt;/p>
&lt;p>&lt;strong>Katrina&lt;/strong>: The biggest challenge we’re facing with every decision is
backwards compatibility and ensuring we don’t break existing users. It
frequently happens that a fix looks straightforward on the surface,
but even fixing a bug can constitute a breaking
change for some users, which means we need to go through an extended
deprecation process to change it, or in some cases we can’t change it
at all. Another challenge is the need to balance customization with
usability in the flag sets we expose on our tools. For example, we get
many proposals for new flags that would certainly be useful to some
users, but not a large enough subset to justify the increased
complexity having them in the tool entails for everyone. The &lt;code>kuberc&lt;/code>
proposal may help with some of these problems by giving individual
users the ability to set or override default values we can’t change,
and even create custom subcommands via aliases.&lt;/p>
&lt;p>&lt;strong>Arpit&lt;/strong>: With every new version release of Kubernetes, maintaining
consistency and integrity is surely challenging: how does the SIG CLI
team tackle it?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: This is mostly similar to the topic mentioned in the
previous question: every new change, especially to existing commands
goes through a lot of scrutiny to ensure we don’t break existing
users. At any point in time we have to keep a reasonable balance
between adding new features and not breaking existing users.&lt;/p>
&lt;h2 id="future-plans-and-contribution">Future plans and contribution&lt;/h2>
&lt;p>&lt;strong>Arpit&lt;/strong>: How do you see the role of CLI tools in the cloud-native
ecosystem evolving in the future?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: I think that CLI tools were and will always be an
important piece of the ecosystem. Whether used by administrators on
remote machines that don’t have GUI or in every CI/CD pipeline, they
are irreplaceable.&lt;/p>
&lt;p>&lt;strong>Arpit&lt;/strong>: Kubernetes is a community-driven project. Any
recommendation for anyone looking into getting involved in SIG CLI
work? Where should they start? Are there any prerequisites?&lt;/p>
&lt;p>&lt;strong>Maciej&lt;/strong>: There are no prerequisites other than a little bit of free
time on your hands and willingness to learn something new :-)&lt;/p>
&lt;p>&lt;strong>Katrina&lt;/strong>: A working knowledge of &lt;a href="https://go.dev/">Go&lt;/a> often helps,
but we also have areas in need of non-code contributions, such as the
&lt;a href="https://github.com/kubernetes-sigs/kustomize/issues/4338">Kustomize docs consolidation
project&lt;/a>.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Network</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/05/09/sig-network-spotlight-2023/</link><pubDate>Tue, 09 May 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/05/09/sig-network-spotlight-2023/</guid><description>
&lt;p>Networking is one of the core pillars of Kubernetes, and the Special Interest
Group for Networking (SIG Network) is responsible for developing and maintaining
the networking features of Kubernetes. It covers all aspects to ensure
Kubernetes provides a reliable and scalable network infrastructure for
containerized applications.&lt;/p>
&lt;p>In this SIG Network spotlight, &lt;a href="https://twitter.com/Sujaystwt">Sujay Dey&lt;/a> talked
with &lt;a href="https://twitter.com/ShaneUtt">Shane Utt&lt;/a>, Software Engineer at Kong, chair
of SIG Network and maintainer of Gateway API, about different aspects of the
SIG, the exciting things going on, and how anyone can get involved and
contribute.&lt;/p>
&lt;p>&lt;strong>Sujay&lt;/strong>: Hello, and first of all, thanks for the opportunity of learning more
about SIG Network. I would love to hear your story, so could you please tell us
a bit about yourself, your role, and how you got involved in Kubernetes,
especially in SIG Network?&lt;/p>
&lt;p>&lt;strong>Shane&lt;/strong>: Hello! Thank you for reaching out.&lt;/p>
&lt;p>My Kubernetes journey started while I was working for a small data centre: we
were early adopters of Kubernetes and focused on using Kubernetes to provide
SaaS products. That experience led to my next position developing a distribution
of Kubernetes with a focus on networking. During this period in my career, I was
active in SIG Network (predominantly as a consumer).&lt;/p>
&lt;p>When I joined &lt;a href="https://konghq.com/">Kong&lt;/a> my role in the community changed significantly, as
Kong actively encourages upstream participation. I greatly increased my
engagement and contributions to the &lt;a href="https://gateway-api.sigs.k8s.io/">Gateway API&lt;/a> project during those
years, and eventually became a maintainer.&lt;/p>
&lt;p>I care deeply about this community and the future of our technology, so when a
chair position for the SIG became available, I volunteered my time immediately.
I&amp;rsquo;ve enjoyed working on Kubernetes over the better part of a decade and I want
to continue to do my part to ensure our community and technology continues to
flourish.&lt;/p>
&lt;p>&lt;strong>Sujay&lt;/strong>: I have to say, that was a truly inspiring journey! Now, let us talk
a bit more about SIG Network. Since we know it covers a lot of ground, could you
please highlight its scope and current focus areas?&lt;/p>
&lt;p>&lt;strong>Shane&lt;/strong>: For those who may be uninitiated: SIG Network is responsible for the
components, interfaces, and APIs which expose networking capabilities to
Kubernetes users and workloads. The &lt;a href="https://github.com/kubernetes/community/blob/master/sig-network/charter.md">charter&lt;/a> is a pretty good
indication of our scope, but I can add some additional highlights on some of our
current areas of focus (this is a non-exhaustive list of sub-projects):&lt;/p>
&lt;p>&lt;em>&lt;strong>kube-proxy &amp;amp; KPNG&lt;/strong>&lt;/em>&lt;/p>
&lt;p>Those familiar with Kubernetes will know the Service API, which enables
exposing a group of pods over a network. The current standard implementation
of Service is known as &lt;code>kube-proxy&lt;/code>, but what may be unfamiliar to people is
that there are a growing number of disparate alternative implementations on the
rise in recent years. To support these implementations (and to provide some
areas of alignment so that they do not become too disparate from each
other), upstream Kubernetes efforts are underway to create a
more modular public interface for &lt;code>kube-proxy&lt;/code>. The intention is for
implementations to join in around a common set of libraries and speak a common
language. This area of focus is known as the KPNG project, and if this sounds
interesting to you, please join us in the KPNG &lt;a href="https://github.com/kubernetes/community/blob/master/sig-network/README.md#meetings">community meetings&lt;/a> and
&lt;code>#sig-network-kpng&lt;/code> on &lt;a href="https://kubernetes.slack.com/">Kubernetes Slack&lt;/a>.&lt;/p>
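&lt;p>For readers newer to the Service API mentioned above, here is a minimal Service manifest (the names and labels are illustrative) of the kind that &lt;code>kube-proxy&lt;/code>, or any alternative implementation, realizes on the cluster network:&lt;/p>
&lt;pre>&lt;code># Illustrative sketch; names and labels are hypothetical
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app      # traffic is routed to pods carrying this label
  ports:
  - port: 80         # port exposed by the Service
    targetPort: 8080 # port the backing pods listen on
&lt;/code>&lt;/pre>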
&lt;p>&lt;em>&lt;strong>Multi-Network&lt;/strong>&lt;/em>&lt;/p>
&lt;p>Today one of the primary requirements for Kubernetes networking is to achieve
connectivity between pods in a cluster, satisfying a large number of
Kubernetes end-users. However, some use cases require isolated networks and
special interfaces for performance-oriented needs (e.g. &lt;code>AF_XDP&lt;/code>, &lt;code>memif&lt;/code>,
&lt;code>SR-IOV&lt;/code>). There&amp;rsquo;s a growing need for special networking configurations in
Kubernetes in general. The Multi-Network project exists to improve the
management of multiple different networks for pods: anyone interested in some
of the lower-level details of pod networking (or anyone having relevant use
cases) can join us in the Multi-Network community meetings and
&lt;code>#sig-network-multi-network&lt;/code> on Kubernetes Slack.&lt;/p>
&lt;p>&lt;em>&lt;strong>Network Policy&lt;/strong>&lt;/em>&lt;/p>
&lt;p>The &lt;code>NetworkPolicy&lt;/code> API sub-group was formed to address network security beyond
the well-known version 1 of the &lt;code>NetworkPolicy&lt;/code> resource. We&amp;rsquo;ve also been
working on the &lt;code>AdminNetworkPolicy&lt;/code> resource (previously known as
&lt;code>ClusterNetworkPolicy&lt;/code>) to provide cluster administrator-focused functionality.
The network policy sub-project is a great place to join in if you&amp;rsquo;re
particularly interested in security and CNI, please feel free to join our
community meetings and the &lt;code>#sig-network-policy-api&lt;/code> channel on Kubernetes
Slack.&lt;/p>
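&lt;p>As a concrete reference point, here is a minimal sketch of the well-known version 1 &lt;code>NetworkPolicy&lt;/code> resource (the names and labels are illustrative):&lt;/p>
&lt;pre>&lt;code># Illustrative sketch; names and labels are hypothetical
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend       # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only these pods may connect
&lt;/code>&lt;/pre>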
&lt;p>&lt;em>&lt;strong>Gateway API&lt;/strong>&lt;/em>&lt;/p>
&lt;p>If you&amp;rsquo;re especially interested in &lt;strong>ingress&lt;/strong> or &lt;strong>mesh&lt;/strong> networking, the &lt;a href="https://gateway-api.sigs.k8s.io/">Gateway
API&lt;/a> may be a sub-project you would enjoy. In Gateway API, we&amp;rsquo;re actively
developing the successor to the illustrious Ingress API, which includes a
Gateway resource which defines the addresses and listeners of the gateway and
various routing types (e.g. &lt;code>HTTPRoute&lt;/code>, &lt;code>GRPCRoute&lt;/code>, &lt;code>TLSRoute&lt;/code>, &lt;code>TCPRoute&lt;/code>,
&lt;code>UDPRoute&lt;/code>, etc.) that attach to Gateways. We also have an initiative within
this project called GAMMA, geared towards using Gateway API resources in a mesh
network context. There are some up-and-coming side projects within Gateway API
as well, including &lt;code>ingress2gateway&lt;/code> which is a tool for compiling existing
Ingress objects to equivalent Gateway API resources, and Blixt, a Layer 4
implementation of Gateway API using Rust/eBPF for the data plane, intended as a
testing and reference implementation. If this sounds interesting, we would love
to have readers join us in our Gateway API community meetings and
&lt;code>#sig-network-gateway-api&lt;/code> on Kubernetes Slack.&lt;/p>
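&lt;p>To illustrate how the resources mentioned above fit together, here is a minimal sketch of a Gateway with an attached &lt;code>HTTPRoute&lt;/code> (the class, names, and backend Service are illustrative, and field details vary by API version):&lt;/p>
&lt;pre>&lt;code># Illustrative sketch; names and the gateway class are hypothetical
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class  # provided by a Gateway API implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway          # attaches this route to the Gateway
  rules:
  - backendRefs:
    - name: example-svc            # a backing Service
      port: 8080
&lt;/code>&lt;/pre>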
&lt;p>&lt;strong>Sujay&lt;/strong>: Couldn’t agree more! That was a very informative description, thanks
for highlighting them so nicely. As you have already mentioned about the SIG
channels to get involved, would you like to add anything about where people like
beginners can jump in and contribute?&lt;/p>
&lt;p>&lt;strong>Shane&lt;/strong>: For help getting started &lt;a href="https://kubernetes.slack.com/">Kubernetes Slack&lt;/a> is a great place
to talk to community members and includes several &lt;code>#sig-network-&amp;lt;project&amp;gt;&lt;/code>
channels as well as our main &lt;code>#sig-network&lt;/code> channel. Also, check for issues
labelled &lt;code>good-first-issue&lt;/code> if you prefer to just dive right into the
repositories. Let us know how we can help you!&lt;/p>
&lt;p>&lt;strong>Sujay&lt;/strong>: What skills are contributors to SIG Network likely to learn?&lt;/p>
&lt;p>&lt;strong>Shane&lt;/strong>: To me, it feels limitless. Practically speaking, it&amp;rsquo;s very much up to
the individual what they &lt;em>want&lt;/em> to learn. However, if you just intend to learn
as much as you possibly can about networking, SIG Network is a great place to
join in and grow your knowledge.&lt;/p>
&lt;p>If you&amp;rsquo;ve ever wondered how the Kubernetes Service API works or wanted to
implement an ingress controller, this is a great place to join in. If you wanted
to dig down deep into the inner workings of CNI, or how the network interfaces
at the pod level are configured, you can do that here as well.&lt;/p>
&lt;p>We have an awesome and diverse community of people from just about every kind of
background you can imagine. This is a great place to share ideas and raise
proposals, improving your skills in design, as well as alignment and consensus
building.&lt;/p>
&lt;p>There&amp;rsquo;s a wealth of opportunities here in SIG Network. There are lots of places
to jump in, and the learning opportunities are boundless.&lt;/p>
&lt;p>&lt;strong>Sujay&lt;/strong>: Thanks a lot! It was a really great discussion, we got to know so
many great things about SIG Network. I&amp;rsquo;m sure that many others will find this
just as useful as I did.&lt;/p></description></item><item><title>Blog: E2E Testing Best Practices, Reloaded</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/04/12/e2e-testing-best-practices-reloaded/</link><pubDate>Wed, 12 Apr 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/04/12/e2e-testing-best-practices-reloaded/</guid><description>
&lt;p>End-to-end (E2E) testing in Kubernetes is how the project validates
functionality with real clusters. Contributors sooner or later encounter it
when asked to write E2E tests for new features or to help with debugging test
failures. Cluster admins or vendors might run the conformance tests, a subset
of all tests in the &lt;a href="https://github.com/kubernetes/kubernetes/tree/v1.27.0-rc.0/test/e2e">E2E test
suite&lt;/a>.&lt;/p>
&lt;p>The underlying &lt;a href="https://github.com/kubernetes/kubernetes/tree/v1.27.0-rc.0/test/e2e/framework">E2E
framework&lt;/a>
for writing these E2E tests has been around for a long
time. Functionality was added to it as needed, leading to code that became hard
to maintain and use. The &lt;a href="https://github.com/kubernetes/community/blob/master/sig-testing/README.md#testing-commons">testing commons
WG&lt;/a>
started cleaning it up, but dissolved before completely achieving their
goals.&lt;/p>
&lt;p>After the &lt;a href="https://github.com/kubernetes/kubernetes/pull/109111">migration to Ginkgo
v2&lt;/a> in Kubernetes 1.25, I
picked up several of the loose ends and started untangling them. This blog post
is a summary of those changes. Some of this content is also found in the
Kubernetes contributor document about &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/writing-good-e2e-tests.md">writing good E2E
tests&lt;/a>
and gets reproduced here to raise awareness that the document has been updated.&lt;/p>
&lt;h2 id="overall-architecture">Overall architecture&lt;/h2>
&lt;p>At the moment, the framework is used in-tree for testing against a cluster
(&lt;code>test/e2e&lt;/code>), testing kubeadm (&lt;code>test/e2e_kubeadm&lt;/code>) and kubelet
(&lt;code>test/e2e_node&lt;/code>). The goal is to make the core &lt;code>test/e2e/framework&lt;/code> a package
that has no dependencies on internal code and that can be used in different E2E
suites without polluting them with features or options that make no sense for
them. This is currently only a &lt;em>technical&lt;/em> goal. There are no longer any
plans to actually move the code into a staging repository.&lt;/p>
&lt;p>The framework acts like a normal client of an apiserver and thus doesn&amp;rsquo;t need
much more than client-go. Since &lt;a href="https://github.com/kubernetes/kubernetes/pull/112043">the sub-package
refactoring&lt;/a>, additional
sub-packages like &lt;code>test/e2e/framework/pod&lt;/code> depend on the framework, not the
other way around. Those other sub-packages therefore can still use internal
code. The import boss configuration enforces &lt;a href="https://github.com/kubernetes/kubernetes/pull/115710">these
constraints&lt;/a>.&lt;/p>
&lt;p>What&amp;rsquo;s left to clean up is that the framework contains a &lt;code>TestContext&lt;/code> with
fields that are used only by some tests or some test suites. The &lt;a href="https://github.com/kubernetes/kubernetes/blob/330b5a2b8dbd681811cb8235947557c99dd8e593/test/e2e/framework/test_context.go#L237-L263">configuration
for &lt;code>test/e2e_node&lt;/code>&lt;/a>
is the last remaining dependency on internal code. Such settings should be
moved into the individual test suites and/or tests. Besides removing those
dependencies, this ensures that an option only shows up in a suite&amp;rsquo;s
command line when it actually has an effect.&lt;/p>
&lt;h2 id="debuggability">Debuggability&lt;/h2>
&lt;p>If your test fails, its failure message should describe the reason for the
failure in as much detail as possible. The failure message is the string that gets
passed (directly or indirectly) to &lt;code>ginkgo.Fail[f]&lt;/code>. That text is what gets
shown in the overview of failed tests for a Prow job and what gets aggregated
by &lt;a href="https://go.k8s.io/triage">https://go.k8s.io/triage&lt;/a>.&lt;/p>
&lt;p>A good failure message:&lt;/p>
&lt;ul>
&lt;li>identifies the test failure&lt;/li>
&lt;li>has enough details to provide some initial understanding of what went wrong&lt;/li>
&lt;/ul>
&lt;p>It&amp;rsquo;s okay for it to contain information that changes during each test
run. Aggregation &lt;a href="https://github.com/kubernetes/test-infra/blob/d56bc333ae8acf176887a3249f750e7a8e0377f0/triage/summarize/text.go#L39-L69">simplifies the failure message with regular
expressions&lt;/a>
before looking for similar failures.&lt;/p>
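&lt;p>As a rough, self-contained illustration of that normalization step (the
patterns below are simplified stand-ins, not the actual triage expressions):&lt;/p>

```go
package main

import (
	"fmt"
	"regexp"
)

// normalize collapses run-specific details (numbers, UUID-like IDs) into
// placeholders so that similar failure messages group together. The
// patterns are simplified stand-ins for the real triage rules.
func normalize(msg string) string {
	uid := regexp.MustCompile(`[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}`)
	num := regexp.MustCompile(`[0-9]+`)
	msg = uid.ReplaceAllString(msg, "UID")
	msg = num.ReplaceAllString(msg, "N")
	return msg
}

func main() {
	fmt.Println(normalize("tried creating 10 foobars, only created 7"))
	// tried creating N foobars, only created N
}
```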
&lt;p>Helper libraries like &lt;a href="https://onsi.github.io/gomega/">Gomega&lt;/a> or
&lt;a href="https://pkg.go.dev/github.com/stretchr/testify">testify&lt;/a> can be used to
produce informative failure messages. Gomega is a bit easier to use in
combination with Ginkgo.&lt;/p>
&lt;p>The E2E framework itself only has one helper function for assertions that is
still recommended. The others are deprecated. Compared to
&lt;code>gomega.Expect(err).NotTo(gomega.HaveOccurred())&lt;/code>,
&lt;code>framework.ExpectNoError(err)&lt;/code> is shorter and produces better failure
messages because it logs the full error and then includes only the shorter
&lt;code>err.Error()&lt;/code> in the failure message.&lt;/p>
&lt;p>As with any other assertion, it is recommended to include additional context in
cases where the parameters being checked by an assertion helper lack relevant
information:&lt;/p>
&lt;pre tabindex="0">&lt;code>framework.ExpectNoError(err, &amp;#34;tried creating %d foobars, only created %d&amp;#34;, foobarsReqd, foobarsCreated)
&lt;/code>&lt;/pre>&lt;p>Use assertions that directly express the check the test is
making. Evaluating a condition in Go code first and then asserting on the
boolean result often isn&amp;rsquo;t informative. For example, this check should be avoided:&lt;/p>
&lt;pre tabindex="0">&lt;code>gomega.Expect(strings.Contains(actualStr, expectedSubStr)).To(gomega.Equal(true))
&lt;/code>&lt;/pre>&lt;p>&lt;a href="https://github.com/kubernetes/kubernetes/issues/105678">Comparing a boolean&lt;/a>
like this against &lt;code>true&lt;/code> or &lt;code>false&lt;/code> with &lt;code>gomega.Equal&lt;/code> or
&lt;code>framework.ExpectEqual&lt;/code> is not useful because dumping the actual and expected
value just distracts from the underlying failure reason.
It is better to pass the actual values to Gomega, which will automatically
include them in the failure message. Add an annotation that explains what the
assertion is about:&lt;/p>
&lt;pre tabindex="0">&lt;code>gomega.Expect(actualStr).To(gomega.ContainSubstring(&amp;#34;xyz&amp;#34;), &amp;#34;checking log output&amp;#34;)
&lt;/code>&lt;/pre>&lt;p>This produces the following failure message:&lt;/p>
&lt;pre tabindex="0">&lt;code> [FAILED] checking log output
Expected
    &amp;lt;string&amp;gt;: hello world
to contain substring
    &amp;lt;string&amp;gt;: xyz
&lt;/code>&lt;/pre>&lt;p>If there is no suitable Gomega assertion, call &lt;code>ginkgo.Failf&lt;/code> directly:&lt;/p>
&lt;pre tabindex="0">&lt;code>import &amp;#34;github.com/onsi/gomega/format&amp;#34;
ok := someCustomCheck(abc)
if !ok {
ginkgo.Failf(&amp;#34;check xyz failed for object:\n%s&amp;#34;, format.Object(abc))
}
&lt;/code>&lt;/pre>&lt;p>It is good practice to include details like the object that failed some
assertion in the failure message because then a) the information is available
when analyzing a failure that occurred in the CI and b) it only gets logged
when some assertion fails. Always dumping objects via log messages can make the
test output very large and may distract from the relevant information.&lt;/p>
&lt;p>Dumping structs with &lt;code>format.Object&lt;/code> is recommended. Starting with Kubernetes
1.26, &lt;code>format.Object&lt;/code> will pretty-print Kubernetes API objects or structs &lt;a href="https://github.com/kubernetes/kubernetes/pull/113384">as
YAML and omit unset
fields&lt;/a>, which is more
readable than other alternatives like &lt;code>fmt.Sprintf(&amp;quot;%+v&amp;quot;)&lt;/code>.&lt;/p>
&lt;pre>&lt;code>import (
    &amp;quot;fmt&amp;quot;
    &amp;quot;k8s.io/api/core/v1&amp;quot;
    &amp;quot;k8s.io/kubernetes/test/utils/format&amp;quot;
)
var pod v1.Pod
fmt.Printf(&amp;quot;Printf: %+v\n\n&amp;quot;, pod)
fmt.Printf(&amp;quot;format.Object:\n%s&amp;quot;, format.Object(pod, 1 /* indent one level */))
=&amp;gt;
Printf: {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName: Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:&amp;lt;nil&amp;gt; DeletionGracePeriodSeconds:&amp;lt;nil&amp;gt; Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[]} Spec:{Volumes:[] InitContainers:[] Containers:[] EphemeralContainers:[] RestartPolicy: TerminationGracePeriodSeconds:&amp;lt;nil&amp;gt; ActiveDeadlineSeconds:&amp;lt;nil&amp;gt; DNSPolicy: NodeSelector:map[] ServiceAccountName: DeprecatedServiceAccount: AutomountServiceAccountToken:&amp;lt;nil&amp;gt; NodeName: HostNetwork:false HostPID:false HostIPC:false ShareProcessNamespace:&amp;lt;nil&amp;gt; SecurityContext:nil ImagePullSecrets:[] Hostname: Subdomain: Affinity:nil SchedulerName: Tolerations:[] HostAliases:[] PriorityClassName: Priority:&amp;lt;nil&amp;gt; DNSConfig:nil ReadinessGates:[] RuntimeClassName:&amp;lt;nil&amp;gt; EnableServiceLinks:&amp;lt;nil&amp;gt; PreemptionPolicy:&amp;lt;nil&amp;gt; Overhead:map[] TopologySpreadConstraints:[] SetHostnameAsFQDN:&amp;lt;nil&amp;gt; OS:nil HostUsers:&amp;lt;nil&amp;gt; SchedulingGates:[] ResourceClaims:[]} Status:{Phase: Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:&amp;lt;nil&amp;gt; InitContainerStatuses:[] ContainerStatuses:[] QOSClass: EphemeralContainerStatuses:[] Resize:}}
format.Object:
&amp;lt;v1.Pod&amp;gt;:
    metadata:
      creationTimestamp: null
    spec:
      containers: null
    status: {}
&lt;/code>&lt;/pre>
&lt;h2 id="recovering-from-test-failures">Recovering from test failures&lt;/h2>
&lt;p>All tests should ensure that a cluster is restored to the state that it was in
before the test ran. &lt;a href="https://pkg.go.dev/github.com/onsi/ginkgo/v2#DeferCleanup">&lt;code>ginkgo.DeferCleanup&lt;/code>
&lt;/a> is recommended for
this because it can be called similar to &lt;code>defer&lt;/code> directly after setting up
something. It is better than &lt;code>defer&lt;/code> because Ginkgo will show additional
details about which cleanup code is running and (if possible) handle timeouts
for that code (see next section). It is better than &lt;code>ginkgo.AfterEach&lt;/code> because
it is not necessary to define additional variables and because
&lt;code>ginkgo.DeferCleanup&lt;/code> executes code in the more useful last-in-first-out order,
i.e. things that get set up first get removed last.&lt;/p>
&lt;p>Objects created in the test namespace do not need to be deleted because
deleting the namespace will also delete them. However, if deleting an object
may fail, then explicitly cleaning it up is better because then failures or
timeouts related to it will be more obvious.&lt;/p>
&lt;p>In cases where the test may have removed the object, &lt;code>framework.IgnoreNotFound&lt;/code>
can be used to ignore the &amp;ldquo;not found&amp;rdquo; error:&lt;/p>
&lt;pre tabindex="0">&lt;code>podClient := f.ClientSet.CoreV1().Pods(f.Namespace.Name)
pod, err := podClient.Create(ctx, testPod, metav1.CreateOptions{})
framework.ExpectNoError(err, &amp;#34;create test pod&amp;#34;)
ginkgo.DeferCleanup(framework.IgnoreNotFound(podClient.Delete), pod.Name, metav1.DeleteOptions{})
&lt;/code>&lt;/pre>&lt;h2 id="interrupting-tests">Interrupting tests&lt;/h2>
&lt;p>When aborting a manual &lt;code>ginkgo ./test/e2e&lt;/code> invocation with CTRL-C or a signal,
the currently running test(s) should stop immediately. This is achieved by
accepting a &lt;code>ctx context.Context&lt;/code> as the first parameter in the Ginkgo callback
function and then passing that context through to all code that might
block. When Ginkgo notices that it needs to shut down, it will cancel that
context and all code trying to use it will immediately return with a &lt;code>context canceled&lt;/code> error. Cleanup callbacks get a new context which will time out
eventually to ensure that tests don&amp;rsquo;t get stuck. For a detailed description,
see &lt;a href="https://onsi.github.io/ginkgo/#interrupting-aborting-and-timing-out-suites">https://onsi.github.io/ginkgo/#interrupting-aborting-and-timing-out-suites&lt;/a>.
Most of the E2E tests &lt;a href="https://github.com/kubernetes/kubernetes/pull/112923">were updated to use the Ginkgo
context&lt;/a> at the start of
the 1.27 development cycle.&lt;/p>
&lt;p>There are some gotchas:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Don&amp;rsquo;t use the &lt;code>ctx&lt;/code> passed into &lt;code>ginkgo.It&lt;/code> in a &lt;code>ginkgo.DeferCleanup&lt;/code>
callback because the context will be canceled when the cleanup code
runs. This is wrong:&lt;/p>
&lt;pre>&lt;code>ginkgo.It(&amp;quot;something&amp;quot;, func(ctx context.Context) {
    ...
    ginkgo.DeferCleanup(func() {
        // do something with ctx
    })
})
&lt;/code>&lt;/pre>
&lt;p>Instead, register a function which accepts a new context:&lt;/p>
&lt;pre>&lt;code>ginkgo.DeferCleanup(func(ctx context.Context) {
    // do something with the new ctx
})
&lt;/code>&lt;/pre>
&lt;p>Anonymous functions can be avoided by passing some existing function and its
parameters directly to &lt;code>ginkgo.DeferCleanup&lt;/code>. Again, be careful &lt;em>not&lt;/em> to pass the
wrong &lt;code>ctx&lt;/code>. This is wrong:&lt;/p>
&lt;pre>&lt;code>ginkgo.It(&amp;quot;something&amp;quot;, func(ctx context.Context) {
    ...
    ginkgo.DeferCleanup(myDeleteFunc, ctx, objName)
})
&lt;/code>&lt;/pre>
&lt;p>Instead, just pass the other parameters and let &lt;code>ginkgo.DeferCleanup&lt;/code>
add a new context:&lt;/p>
&lt;pre>&lt;code>ginkgo.DeferCleanup(myDeleteFunc, objName)
&lt;/code>&lt;/pre>
&lt;/li>
&lt;li>
&lt;p>When starting some background goroutine in a &lt;code>ginkgo.BeforeEach&lt;/code> callback,
use &lt;code>context.WithCancel(context.Background())&lt;/code>. The context passed into the
callback will get canceled when the callback returns, which would cause the
background goroutine to stop before the test runs. This works:&lt;/p>
&lt;pre>&lt;code>backgroundCtx, cancel := context.WithCancel(context.Background())
ginkgo.DeferCleanup(cancel)
_, controller = cache.NewInformer( ... )
go controller.Run(backgroundCtx.Done())
&lt;/code>&lt;/pre>
&lt;/li>
&lt;li>
&lt;p>When adding a timeout to the context for one particular operation,
beware of overwriting the &lt;code>ctx&lt;/code> variable. The following code applies the
timeout not just to the next call but also to everything after it:&lt;/p>
&lt;pre>&lt;code>ctx, cancel := context.WithTimeout(ctx, 5 * time.Second)
defer cancel()
someOperation(ctx)
...
anotherOperation(ctx)
&lt;/code>&lt;/pre>
&lt;p>It is better to use a different variable name:&lt;/p>
&lt;pre>&lt;code>timeoutCtx, cancel := context.WithTimeout(ctx, 5 * time.Second)
defer cancel()
someOperation(timeoutCtx)
&lt;/code>&lt;/pre>
&lt;p>When the intention is to set a timeout for the entire callback, use
&lt;a href="https://pkg.go.dev/github.com/onsi/ginkgo/v2#NodeTimeout">&lt;code>ginkgo.NodeTimeout&lt;/code>&lt;/a>:&lt;/p>
&lt;pre>&lt;code>ginkgo.It(&amp;quot;something&amp;quot;, ginkgo.NodeTimeout(30 * time.Second), func(ctx context.Context) {
})
&lt;/code>&lt;/pre>
&lt;p>There is also &lt;code>ginkgo.SpecTimeout&lt;/code>, but its timeout additionally covers the time
taken by &lt;code>BeforeEach&lt;/code>, &lt;code>AfterEach&lt;/code> and &lt;code>DeferCleanup&lt;/code> callbacks.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h2 id="polling-and-timeouts">Polling and timeouts&lt;/h2>
&lt;p>When waiting for something to happen, use a reasonable timeout. Without it, a
test might keep running until the entire test suite gets killed by the
CI. Beware that the CI under load may take a lot longer to complete some
operation compared to running the same test locally. On the other hand, an
overly long timeout also has drawbacks:&lt;/p>
&lt;ul>
&lt;li>When a feature is broken so that the expected state doesn&amp;rsquo;t get reached, a test
waiting for that state first needs to time out before the test fails.&lt;/li>
&lt;li>If a state is expected to be reached within a certain time frame, then a
timeout that is much higher will cause test runs to be considered successful
although the feature was too slow. A dedicated performance test in a well-known
environment may be a better solution for testing such performance expectations.&lt;/li>
&lt;/ul>
&lt;p>The framework provides some &lt;a href="https://github.com/kubernetes/kubernetes/blob/eba98af1d8b19b120e39f3/test/e2e/framework/timeouts.go#L44-L109">common
timeouts&lt;/a>
through the &lt;a href="https://github.com/kubernetes/kubernetes/blob/1e84987baccbccf929eba98af1d8b19b120e39f3/test/e2e/framework/framework.go#L122-L123">framework
instance&lt;/a>.
When writing a test, check whether one of those fits before defining a custom
timeout in the test.&lt;/p>
&lt;p>Good code that waits for something to happen meets the following criteria:&lt;/p>
&lt;ul>
&lt;li>accepts a context for test timeouts&lt;/li>
&lt;li>depending on how the test suite was invoked:
&lt;ul>
&lt;li>informative during interactive use (i.e. intermediate reports, either
periodically or on demand)&lt;/li>
&lt;li>little to no output during a CI run except when it fails&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>full explanation when it fails: if it observes some state and then
encounters errors reading that state, dumping both the latest
observed state and the latest error is useful&lt;/li>
&lt;li>extension mechanism for writing custom checks&lt;/li>
&lt;li>early abort when the condition can no longer be reached&lt;/li>
&lt;/ul>
&lt;p>&lt;a href="https://pkg.go.dev/github.com/onsi/gomega#Eventually">&lt;code>gomega.Eventually&lt;/code>&lt;/a>
satisfies all of these criteria and therefore is recommended, but not required.
In &lt;a href="https://github.com/kubernetes/kubernetes/pull/113298">https://github.com/kubernetes/kubernetes/pull/113298&lt;/a>,
&lt;a href="https://github.com/kubernetes/kubernetes/blob/222f65506252354da012c2e9d5457a6944a4e681/test/e2e/framework/pod/wait.go">test/e2e/framework/pods/wait.go&lt;/a>
and the framework were modified to use gomega. Typically, &lt;code>Eventually&lt;/code> is
passed a function which gets an object or lists several of them, then &lt;code>Should&lt;/code>
checks against the expected result. Errors and retries specific to Kubernetes
are handled by &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/get.go">wrapping client-go
functions&lt;/a>.&lt;/p>
&lt;p>Using normal Gomega assertions in helper packages is problematic for two reasons:&lt;/p>
&lt;ul>
&lt;li>The stacktrace associated with the failure starts with the helper unless
extra care is taken to pass in a stack offset.&lt;/li>
&lt;li>Additional explanations for a potential failure must be prepared beforehand
and passed in.&lt;/li>
&lt;/ul>
&lt;p>The E2E framework therefore uses a different approach:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes/kubernetes/blob/222f65506252354da012c2e9d5457a6944a4e681/test/e2e/framework/expect.go#L80-L101">&lt;code>framework.Gomega()&lt;/code>&lt;/a>
offers similar functions as the &lt;code>gomega&lt;/code> package, except that they return a
normal error instead of failing the test.&lt;/li>
&lt;li>That error gets wrapped with &lt;code>fmt.Errorf(&amp;quot;&amp;lt;explanation&amp;gt;: %w&amp;quot;, err)&lt;/code> to
add additional information, just as in normal Go code.&lt;/li>
&lt;li>Wrapping the error (&lt;code>%w&lt;/code> instead of &lt;code>%v&lt;/code>) is important because then
&lt;code>framework.ExpectNoError&lt;/code> directly uses the error message as the failure
message without additional boilerplate text. It is also able to log the
stacktrace where the error occurred, not just where it was finally treated as a
test failure.&lt;/li>
&lt;/ul>
&lt;h2 id="tips-for-writing-and-debugging-long-running-tests">Tips for writing and debugging long-running tests&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>Use &lt;code>ginkgo.By&lt;/code> to record individual steps. Ginkgo will use that information
when describing where a test timed out.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Invoke the &lt;code>ginkgo&lt;/code> CLI with &lt;code>--poll-progress-after=30s&lt;/code> or some other
suitable duration to &lt;a href="https://onsi.github.io/ginkgo/#getting-visibility-into-long-running-specs">be informed
early&lt;/a>
why a test doesn&amp;rsquo;t complete and where it is stuck. A SIGINFO or SIGUSR1
signal can be sent to the CLI and/or e2e.test processes to trigger an
immediate progress report.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Use &lt;a href="https://pkg.go.dev/github.com/onsi/gomega#Eventually">&lt;code>gomega.Eventually&lt;/code>&lt;/a>
to wait for some condition. When it times out or
gets stuck, the last failed assertion will be included in the report
automatically. A good way to invoke it is:&lt;/p>
&lt;pre>&lt;code>gomega.Eventually(ctx, func(ctx context.Context) (book Book, err error) {
    // Retrieve book from API server and return it.
    ...
}).WithPolling(5 * time.Second).WithTimeout(30 * time.Second).
    Should(gomega.HaveField(&amp;quot;Author.DOB.Year()&amp;quot;, gomega.BeNumerically(&amp;quot;&amp;lt;&amp;quot;, 1900)))
&lt;/code>&lt;/pre>
&lt;p>Avoid testing for some condition inside the callback and returning a boolean
because then failure messages are not informative (see above). See
&lt;a href="https://github.com/kubernetes/kubernetes/pull/114640">https://github.com/kubernetes/kubernetes/pull/114640&lt;/a> for an example where
&lt;a href="https://pkg.go.dev/github.com/onsi/gomega@v1.27.2/gcustom">gomega/gcustom&lt;/a>
was used to write assertions.&lt;/p>
&lt;p>Some of the E2E framework sub-packages have helper functions that wait for
certain domain-specific conditions. Currently most of these functions don&amp;rsquo;t
follow best practices (not using gomega.Eventually, error messages not very
informative). &lt;a href="https://github.com/kubernetes/kubernetes/issues/106575">Work is
planned&lt;/a> in that
area, so beware that these APIs may
change at some point.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Use &lt;code>gomega.Consistently&lt;/code> to ensure that some condition is true
for a while. As with &lt;code>gomega.Eventually&lt;/code>, make assertions about the
value instead of checking the value with Go code and then asserting
that the code returns true.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Both &lt;code>gomega.Consistently&lt;/code> and &lt;code>gomega.Eventually&lt;/code> can be aborted early via
&lt;a href="https://onsi.github.io/gomega/#bailing-out-early---polling-functions">&lt;code>gomega.StopPolling&lt;/code>&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Avoid polling with functions that don&amp;rsquo;t take a context (&lt;code>wait.Poll&lt;/code>,
&lt;code>wait.PollImmediate&lt;/code>, &lt;code>wait.Until&lt;/code>, &amp;hellip;) and replace with their counterparts
that do (&lt;code>wait.PollWithContext&lt;/code>, &lt;code>wait.PollImmediateWithContext&lt;/code>,
&lt;code>wait.UntilWithContext&lt;/code>, &amp;hellip;) or even better, with &lt;code>gomega.Eventually&lt;/code>.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h2 id="next-steps">Next steps&lt;/h2>
&lt;p>Using &lt;code>wait.Poll&lt;/code> in E2E tests can be detected with
&lt;a href="https://github.com/ashanbrown/forbidigo">forbidigo&lt;/a> since &lt;a href="https://github.com/ashanbrown/forbidigo/pull/21">import alias
support&lt;/a> was merged. In
Kubernetes, that can be enabled in a golangci-lint invocation as soon as a
&lt;a href="https://github.com/golangci/golangci-lint/pull/3612">configuration extension&lt;/a>
is merged. Another
&lt;a href="https://github.com/golangci/golangci-lint/pull/3617">enhancement&lt;/a> would be
useful, but not absolutely required.&lt;/p>
&lt;p>Because a lot of existing code wouldn&amp;rsquo;t pass such a check, it probably will
only be enabled in the &lt;a href="https://groups.google.com/a/kubernetes.io/g/dev/c/myGiml72IbM/m/BhQqP4_OAwAJ">new stricter pull request
linting&lt;/a>
initially. Converting individual sub packages similar to
&lt;a href="https://github.com/kubernetes/kubernetes/pull/115548">&lt;code>test/e2e/framework/pod&lt;/code>&lt;/a>
to match current best practices would be a good way for new contributors to get
involved.&lt;/p>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/blob/master/sig-testing/README.md">SIG
Testing&lt;/a>&amp;rsquo;s
Slack channel is a good place to start. At KubeCon EU 2023, the &lt;a href="https://kccnceu2023.sched.com/event/1Hzcr/keeping-the-lights-on-and-the-bugs-away-patrick-ohly-intel">&amp;ldquo;Keeping the
lights on and the bugs away&amp;rdquo;
talk&lt;/a>
will cover some of the material of this blog post. Catch me there or meet me at
the Intel booth to discuss this further!&lt;/p></description></item><item><title>Blog: From Zero to Kubernetes Subproject Lead</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/29/from-zero-to-k8s-subproject-lead/</link><pubDate>Wed, 29 Mar 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/29/from-zero-to-k8s-subproject-lead/</guid><description>
&lt;p>Getting started in any open-source community can be daunting, especially if it’s a big one like
Kubernetes. I wrote this post to share my experience and encourage others to join up. All
it takes is some curiosity and a willingness to show up!&lt;/p>
&lt;p>Here’s how my journey unfolded at a high level:&lt;/p>
&lt;ol>
&lt;li>What am I interested in? Is there a SIG (Special Interest Group) or a WG (Working Group) that is
dedicated to that topic, or something similar? &lt;/li>
&lt;li>Sign up for their mailing list and start hopping on meetings.&lt;/li>
&lt;li>When (never if!) there are opportunities to help out and it aligns with your skills and desired
growth areas, raise your hand.&lt;/li>
&lt;li>Ask for lots of help and don’t be shy about not knowing everything (or anything!)&lt;/li>
&lt;li>Keep plugging along, even if progress isn’t as fast as you would like it to be.&lt;/li>
&lt;/ol>
&lt;h2 id="starting-up">Starting up&lt;/h2>
&lt;p>First things first. What are you interested in learning more about? There are so many wonderful SIGs
and working groups in the Kubernetes community: there’s something for everyone. And continuing to
show up and participate will be so much easier if you find what you are doing
interesting. Likewise, continued participation is what keeps the community thriving, so that
interest will drive you to have more of an impact.&lt;/p>
&lt;p>Also: it’s ok to show up knowing nothing! I remember showing up knowing
very little about Kubernetes or how the community itself worked. And while I know more about how the
community functions today, I am still learning all the time about it and the project. Fortunately,
the community is full of friendly people who want to help you learn. Learning as you go is expected
and celebrated. When you raise your hand to do something, even if you know nothing, people will
cheer and help you along the way. &lt;/p>
&lt;p>That process was exactly my story. It was my first or second meeting with &lt;a href="https://github.com/kubernetes/community/tree/master/sig-security">SIG
Security&lt;/a>, and &lt;a href="https://github.com/PushkarJ">Pushkar
Joglekar&lt;/a> mentioned that he needed a lead for a subproject he was
creating after having done a security assessment of &lt;a href="https://cluster-api.sigs.k8s.io/">Cluster API&lt;/a>.
Everyone was so friendly in the meeting
that I thought, &amp;ldquo;Hey, why not try it out?&amp;rdquo; And since then, I have received so much support and
encouragement from my co-leads who are delighted to have me, &lt;em>especially&lt;/em> because I am a beginner;
new participation is what keeps the community healthy.&lt;/p>
&lt;h2 id="always-learning">Always learning&lt;/h2>
&lt;p>My participation has also been a great learning experience on several fronts. First, I have been
exposed to techniques for how to build community consensus. It’s simple stuff: show up at other SIG
or working group meetings, share your ideas or where you are looking for help, find people who are
interested and have the knowledge to help, build an action plan together, do it, and share as you
execute. But the other thing that I’m learning is that building this consensus and executing it in a
transparent, inviting way simply takes time. &lt;/p>
&lt;p>I also have to be patient with myself and remember that I am learning as I go. The &lt;a href="https://github.com/kubernetes/kubernetes">Kubernetes git
repo&lt;/a> can be daunting to navigate. Knowing the next best
step isn’t always obvious. But this is where my third learning curve, how to engage the community
to get what I need, comes into play. It turns out that asking questions in the &lt;a href="https://slack.k8s.io/">Kubernetes Slack
workspace&lt;/a> and bringing my topics to the SIG Security meetings when I need
help is an amazing way to get what I need! Again, simple stuff, but until you do it, it’s not always
obvious.&lt;/p>
&lt;h2 id="why-you---a-beginner---are-important-to-the-project">Why you - a beginner - are important to the project&lt;/h2>
&lt;p>In many ways, beginners are the most important part of the community. To put a finer point on it:
asking for, receiving, and then giving help is a very relevant part of how the community grows and
flourishes. When we take on and then pass on knowledge, we ensure that the community grows enough to
keep supporting the needs of the people who rely on the project, whatever it is. You have
superpowers as a beginner! &lt;/p>
&lt;p>I hope people who read this post have their curiosity piqued about getting involved in the
community. It may seem scary. My experience has been such that, about halfway through your first
step, you realize there are loads of people here who want to help you learn and are excited for you
expressing interest and trying to participate, and the fear melts away. Sure, I’m still uncertain
about a few things, but I know the community has my back and will support my growth. &lt;/p>
&lt;p>Come on in, the water’s fine!&lt;/p></description></item><item><title>Blog: Introducing KWOK: Kubernetes WithOut Kubelet</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/01/introducing-kwok/</link><pubDate>Wed, 01 Mar 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/01/introducing-kwok/</guid><description>
&lt;p>&lt;strong>Author:&lt;/strong> Shiming Zhang (DaoCloud), Wei Huang (Apple), Yibo Zhuang (Apple)&lt;/p>
&lt;img style="float: right; display: inline-block; margin-left: 2em; max-width: 15em;" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/01/introducing-kwok/kwok.svg" alt="KWOK logo" />
&lt;p>Have you ever wondered how to set up a cluster of thousands of nodes in just seconds, how to simulate real nodes with a low resource footprint, and how to test your Kubernetes controller at scale without spending much on infrastructure?&lt;/p>
&lt;p>If you answered &amp;ldquo;yes&amp;rdquo; to any of these questions, then you might be interested in KWOK, a toolkit that enables you to create a cluster of thousands of nodes in seconds.&lt;/p>
&lt;h2 id="what-is-kwok">What is KWOK?&lt;/h2>
&lt;p>KWOK stands for Kubernetes WithOut Kubelet. So far, it provides two tools:&lt;/p>
&lt;dl>
&lt;dt>&lt;code>kwok&lt;/code>&lt;/dt>
&lt;dd>&lt;code>kwok&lt;/code> is the cornerstone of this project, responsible for simulating the lifecycle of fake nodes, pods, and other Kubernetes API resources.&lt;/dd>
&lt;dt>&lt;code>kwokctl&lt;/code>&lt;/dt>
&lt;dd>&lt;code>kwokctl&lt;/code> is a CLI tool designed to streamline the creation and management of clusters, with nodes simulated by &lt;code>kwok&lt;/code>.&lt;/dd>
&lt;/dl>
&lt;h2 id="why-use-kwok">Why use KWOK?&lt;/h2>
&lt;p>KWOK has several advantages:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Speed&lt;/strong>: You can create and delete clusters and nodes almost instantly, without waiting for boot or provisioning.&lt;/li>
&lt;li>&lt;strong>Compatibility&lt;/strong>: KWOK works with any tools or clients that are compliant with Kubernetes APIs, such as kubectl, helm, kui, etc.&lt;/li>
&lt;li>&lt;strong>Portability&lt;/strong>: KWOK has no specific hardware or software requirements. You can run it using the pre-built images once Docker or Nerdctl is installed. Alternatively, binaries are also available for all platforms and can be easily installed.&lt;/li>
&lt;li>&lt;strong>Flexibility&lt;/strong>: You can configure different node types, labels, taints, capacities, conditions, etc., and you can configure different pod behaviors, status, etc. to test different scenarios and edge cases.&lt;/li>
&lt;li>&lt;strong>Performance&lt;/strong>: You can simulate thousands of nodes on your laptop without significant consumption of CPU or memory resources.&lt;/li>
&lt;/ul>
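&lt;p>As a rough sketch of that flexibility, fake nodes are plain &lt;code>Node&lt;/code> objects, so a small script can stamp out many manifests to apply against a &lt;code>kwok&lt;/code>-managed cluster. The &lt;code>kwok.x-k8s.io/node: fake&lt;/code> annotation and &lt;code>type: kwok&lt;/code> label below follow the KWOK documentation at the time of writing; verify them against the current docs before use:&lt;/p>

```shell
# Generate manifests for three fake nodes. Applying them, e.g. with
# `kubectl apply -f nodes/` against a kwok-managed cluster, is not done here.
mkdir -p nodes
for i in 0 1 2; do
  cat > "nodes/kwok-node-${i}.yaml" <<EOF
apiVersion: v1
kind: Node
metadata:
  name: kwok-node-${i}
  annotations:
    kwok.x-k8s.io/node: fake
  labels:
    type: kwok
EOF
done
ls nodes   # lists kwok-node-0.yaml, kwok-node-1.yaml, kwok-node-2.yaml
```

&lt;p>Scaling the loop bound up is how you get to "thousands of nodes in seconds": the manifests are cheap to create, and &lt;code>kwok&lt;/code> takes care of keeping their status updated.&lt;/p>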
&lt;h2 id="what-are-the-use-cases">What are the use cases?&lt;/h2>
&lt;p>KWOK can be used for various purposes:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Learning&lt;/strong>: You can use KWOK to learn about Kubernetes concepts and features without worrying about resource waste or other consequences.&lt;/li>
&lt;li>&lt;strong>Development&lt;/strong>: You can use KWOK to develop new features or tools for Kubernetes without access to a real cluster or requiring other components.&lt;/li>
&lt;li>&lt;strong>Testing&lt;/strong>:
&lt;ul>
&lt;li>You can measure how well your application or controller scales with different numbers of nodes and/or pods.&lt;/li>
&lt;li>You can generate high loads on your cluster by creating many pods or services with different resource requests or limits.&lt;/li>
&lt;li>You can simulate node failures or network partitions by changing node conditions or randomly deleting nodes.&lt;/li>
&lt;li>You can test how your controller interacts with other components or features of Kubernetes by enabling different feature gates or API versions.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
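&lt;p>For instance, a node failure can be simulated by flipping a node&amp;rsquo;s &lt;code>Ready&lt;/code> condition. A sketch of such a status patch is below; the field names follow the core/v1 &lt;code>NodeCondition&lt;/code> schema, and applying it (for example with &lt;code>kubectl patch node kwok-node-0 --subresource=status --type=strategic --patch-file=not-ready-patch.json&lt;/code>) is shown only as an assumption to verify against your kubectl version:&lt;/p>

```shell
# Write a status patch that marks a node's Ready condition False,
# the kind of change used to simulate a failed node. Applying the patch
# to a cluster is intentionally left out of this sketch.
cat > not-ready-patch.json <<'EOF'
{
  "status": {
    "conditions": [
      {
        "type": "Ready",
        "status": "False",
        "reason": "NodeStatusUnknown",
        "message": "simulated failure"
      }
    ]
  }
}
EOF
# Sanity-check that the patch is well-formed JSON before using it.
python3 -m json.tool not-ready-patch.json > /dev/null && echo "patch OK"
```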
&lt;h2 id="what-are-the-limitations">What are the limitations?&lt;/h2>
&lt;p>KWOK is not intended to completely replace the components it simulates. It has some limitations that you should be aware of:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Functionality&lt;/strong>: KWOK is not a kubelet and may exhibit different behaviors in areas such as pod lifecycle management, volume mounting, and device plugins. Its primary function is to simulate updates of node and pod status.&lt;/li>
&lt;li>&lt;strong>Accuracy&lt;/strong>: It&amp;rsquo;s important to note that KWOK doesn&amp;rsquo;t accurately reflect the performance or behavior of real nodes under various workloads or environments. Instead, it approximates some behaviors using simple formulas.&lt;/li>
&lt;li>&lt;strong>Security&lt;/strong>: KWOK does not enforce any security policies or mechanisms on simulated nodes. It assumes that all requests from the kube-apiserver are authorized and valid.&lt;/li>
&lt;/ul>
&lt;h2 id="getting-started">Getting started&lt;/h2>
&lt;p>If you are interested in trying out KWOK, please check its &lt;a href="https://kwok.sigs.k8s.io/">documentation&lt;/a> for more details.&lt;/p>
&lt;figure>
&lt;img src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/03/01/introducing-kwok/manage-clusters.svg"
alt="Animation of a terminal showing kwokctl in use"/> &lt;figcaption>
&lt;p>Using kwokctl to manage simulated clusters&lt;/p>
&lt;/figcaption>
&lt;/figure>
&lt;h2 id="getting-involved">Getting Involved&lt;/h2>
&lt;p>If you&amp;rsquo;re interested in participating in future discussions or development related to KWOK, there are several ways to get involved:&lt;/p>
&lt;ul>
&lt;li>Slack: &lt;a href="https://kubernetes.slack.com/messages/kwok/">#kwok&lt;/a> for general usage discussion, &lt;a href="https://kubernetes.slack.com/messages/kwok-dev/">#kwok-dev&lt;/a> for development discussion. (visit &lt;a href="https://slack.k8s.io/">slack.k8s.io&lt;/a> for a workspace invitation)&lt;/li>
&lt;li>Open Issues/PRs/Discussions in &lt;a href="https://sigs.k8s.io/kwok/">sigs.k8s.io/kwok&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>We welcome feedback and contributions from anyone who wants to join us in this exciting project.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Instrumentation</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/02/03/sig-instrumentation-spotlight-2023/</link><pubDate>Fri, 03 Feb 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2023/02/03/sig-instrumentation-spotlight-2023/</guid><description>
&lt;p>Observability requires the right data at the right time for the right consumer (human or piece of software) to make the right decision. In the context of Kubernetes, having best practices for cluster observability across all Kubernetes components is crucial.&lt;/p>
&lt;p>SIG Instrumentation helps to address this issue by providing best practices and tools that all other SIGs use to instrument Kubernetes components, like the &lt;em>API server&lt;/em>, &lt;em>scheduler&lt;/em>, &lt;em>kubelet&lt;/em> and &lt;em>kube-controller-manager&lt;/em>.&lt;/p>
&lt;p>In this SIG Instrumentation spotlight, &lt;a href="https://www.linkedin.com/in/imrannoormohamed/">Imran Noor Mohamed&lt;/a>, SIG ContribEx-Comms tech lead, talked with &lt;a href="https://twitter.com/ehashdn">Elana Hashman&lt;/a> and &lt;a href="https://www.linkedin.com/in/hankang">Han Kang&lt;/a>, chairs of SIG Instrumentation, about how the SIG is organized, what the current challenges are, and how anyone can get involved and contribute.&lt;/p>
&lt;h2 id="about-sig-instrumentation">About SIG Instrumentation&lt;/h2>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: Hello, thank you for the opportunity of learning more about SIG Instrumentation. Could you tell us a bit about yourself, your role, and how you got involved in SIG Instrumentation?&lt;/p>
&lt;p>&lt;strong>Han (HK)&lt;/strong>: I started in SIG Instrumentation in 2018, and became a chair in 2020. I primarily got involved with SIG instrumentation due to a number of upstream issues with metrics which ended up affecting GKE in bad ways. As a result, we ended up launching an initiative to stabilize our metrics and make metrics a proper API.&lt;/p>
&lt;p>&lt;strong>Elana (EH)&lt;/strong>: I also joined SIG Instrumentation in 2018 and became a chair at the same time as Han. I was working as a site reliability engineer (SRE) on bare metal Kubernetes clusters and was working to build out our observability stack. I encountered some issues with label joins where Kubernetes metrics didn’t match kube-state-metrics (&lt;a href="https://github.com/kubernetes/kube-state-metrics">KSM&lt;/a>) and started participating in SIG meetings to improve things. I helped test performance improvements to kube-state-metrics and ultimately coauthored a KEP for overhauling metrics in the 1.14 release to improve usability.&lt;/p>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: Interesting! Does that mean SIG Instrumentation involves a lot of plumbing?&lt;/p>
&lt;p>&lt;strong>Han (HK)&lt;/strong>: I wouldn’t say it involves a ton of plumbing, though it does touch basically every code base. We have our own dedicated directories for our metrics, logs, and tracing frameworks which we tend to work out of primarily. We do have to interact with other SIGs in order to propagate our changes which makes us more of a horizontal SIG.&lt;/p>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: Speaking about interaction and coordination with other SIGs, could you describe how the SIG is organized?&lt;/p>
&lt;p>&lt;strong>Elana (EH)&lt;/strong>: In SIG Instrumentation, we have two chairs, Han and myself, as well as two tech leads, David Ashpole and Damien Grisonnet. We all work together as the SIG’s leads in order to run meetings, triage issues and PRs, review and approve KEPs, plan for each release, present at KubeCon and community meetings, and write our annual report. Within the SIG we also have a number of important subprojects, each of which is stewarded by its subproject owners. For example, Marek Siarkowicz is a subproject owner of &lt;a href="https://github.com/kubernetes-sigs/metrics-server">metrics-server&lt;/a>.&lt;/p>
&lt;p>Because we’re a horizontal SIG, some of our projects have a wide scope and require coordination from a dedicated group of contributors. For example, in order to guide the Kubernetes migration to structured logging, we chartered the &lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md">Structured Logging&lt;/a> Working Group (WG), organized by Marek and Patrick Ohly. The WG doesn’t own any code, but helps with various components such as the &lt;em>kubelet&lt;/em>, &lt;em>scheduler&lt;/em>, etc. in migrating their code to use structured logs.&lt;/p>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: Walking through the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-instrumentation/charter.md">charter&lt;/a> alone it’s clear that SIG Instrumentation has a lot of sub-projects. Could you highlight some important ones?&lt;/p>
&lt;p>&lt;strong>Han (HK)&lt;/strong>: We have many different sub-projects and we are in dire need of people who can come and help shepherd them. Our most important projects in-tree (that is, within the kubernetes/kubernetes repo) are metrics, tracing, and structured logging. Our most important projects out-of-tree are (a) KSM (kube-state-metrics) and (b) metrics-server.&lt;/p>
&lt;p>&lt;strong>Elana (EH)&lt;/strong>: Echoing this, we would love to bring on more maintainers for kube-state-metrics and metrics-server. Our friends at WG Structured Logging are also looking for contributors. Other subprojects include klog, prometheus-adapter, and a new subproject that we just launched for collecting high-fidelity, scalable utilization metrics called &lt;a href="https://github.com/kubernetes-sigs/usage-metrics-collector">usage-metrics-collector&lt;/a>. All are seeking new contributors!&lt;/p>
&lt;h2 id="current-status-and-ongoing-challenges">Current status and ongoing challenges&lt;/h2>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: For release &lt;a href="https://github.com/kubernetes/sig-release/tree/master/releases/release-1.26">1.26&lt;/a> we can see that there are a significant number of metrics, logging, and tracing &lt;a href="https://www.k8s.dev/resources/keps/">KEPs&lt;/a> in the pipeline. Would you like to point out the highlights of the last release (maybe alpha &amp;amp; stable milestone candidates)?&lt;/p>
&lt;p>&lt;strong>Han (HK)&lt;/strong>: We can now generate &lt;a href="https://kubernetes.io/docs/reference/instrumentation/metrics/">documentation&lt;/a> for every single metric in the main Kubernetes code base! We have a pretty fancy static analysis pipeline that enables this functionality. We’ve also added feature metrics so that you can look at your metrics to determine which features are enabled in your cluster at a given time. Lastly, we added a component-sli endpoint, which should make it easy for people to create availability SLOs for &lt;em>control-plane&lt;/em> components.&lt;/p>
&lt;p>&lt;strong>Elana (EH)&lt;/strong>: We’ve also been working on tracing KEPs for both the &lt;em>API server&lt;/em> and &lt;em>kubelet&lt;/em>, though neither graduated in 1.26. I’m also really excited about the work Han is doing with WG Reliability to extend and improve our metrics stability framework.&lt;/p>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: What do you think are the Kubernetes-specific challenges tackled by the SIG Instrumentation? What are the future efforts to solve them?&lt;/p>
&lt;p>&lt;strong>Han (HK)&lt;/strong>: SIG instrumentation suffered a bit in the past from being a horizontal SIG. We did not have an obvious location to put our code and did not have a good mechanism to audit metrics that people would randomly add. We’ve fixed this over the years and now we have dedicated spots for our code and a reliable mechanism for auditing new metrics. We also now offer stability guarantees for metrics. We hope to have full-blown tracing up and down the kubernetes stack, and metric support via exemplars.&lt;/p>
&lt;p>&lt;strong>Elana (EH)&lt;/strong>: I think SIG Instrumentation is a really interesting SIG because it poses different kinds of opportunities to get involved than in other SIGs. You don’t have to be a software developer to contribute to our SIG! All of our components and subprojects are focused on better understanding Kubernetes and its performance in production, which allowed me to get involved as one of the few SIG Chairs working as an SRE at that time. I like that we provide opportunities for newcomers to contribute through using, testing, and providing feedback on our subprojects, which is a lower barrier to entry. Because many of these projects are out-of-tree, I think one of our challenges is to figure out what’s in scope for core Kubernetes SIGs instrumentation subprojects, what’s missing, and then fill in the gaps.&lt;/p>
&lt;h2 id="community-and-contribution">Community and contribution&lt;/h2>
&lt;p>&lt;strong>Imran (INM)&lt;/strong>: Kubernetes values community over products. Any recommendation for anyone looking into getting involved in SIG Instrumentation work? Where should they start (new contributor-friendly areas within SIG?)&lt;/p>
&lt;p>&lt;strong>Han (HK) and Elana (EH)&lt;/strong>: Come to our bi-weekly triage &lt;a href="https://github.com/kubernetes/community/tree/master/sig-instrumentation#meetings">meetings&lt;/a>! They aren’t recorded and are a great place to ask questions and learn about our ongoing work. We strive to be a friendly community and one of the easiest SIGs to get started with. You can check out our latest KubeCon NA 2022 &lt;a href="https://youtu.be/JIzrlWtAA8Y">SIG Instrumentation Deep Dive&lt;/a> to get more insight into our work. We also invite you to join our Slack channel #sig-instrumentation and feel free to reach out to any of our SIG leads or subproject owners directly.&lt;/p>
&lt;p>Thank you so much for your time and insights into the workings of SIG Instrumentation!&lt;/p></description></item><item><title>Blog: Prow and Tide for Kubernetes Contributors</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/12/12/prow-and-tide-for-kubernetes-contributors/</link><pubDate>Mon, 12 Dec 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/12/12/prow-and-tide-for-kubernetes-contributors/</guid><description>
&lt;p>&lt;strong>Authors:&lt;/strong> &lt;a href="https://github.com/chris-short">Chris Short&lt;/a>, &lt;a href="https://github.com/fsmunoz">Frederico Muñoz&lt;/a>&lt;/p>
&lt;hr>
&lt;p>In my work in the Kubernetes world, I often look up a label or Prow command. The systems behind the scenes
(&lt;a href="https://prow.kubernetes.io/">Prow&lt;/a> and
&lt;a href="https://pkg.go.dev/k8s.io/test-infra/prow/cmd/tide#section-readme">Tide&lt;/a>) are here to help Kubernetes
Contributors get stuff done.&lt;/p>
&lt;p>Labeling an issue or PR with the relevant SIG, WG, or subproject is as important as having someone assigned to it. To quote
&lt;a href="https://docs.prow.k8s.io/docs/components/core/tide/">the docs&lt;/a>, &amp;ldquo;Tide is a
&lt;a href="https://docs.prow.k8s.io/docs/">Prow&lt;/a> component for managing a pool of GitHub PRs that match a given set of
criteria. It will automatically retest PRs that meet the criteria (&amp;rsquo;tide comes in&amp;rsquo;) and automatically merge
them when they have up-to-date passing test results (&amp;rsquo;tide goes out&amp;rsquo;).&amp;rdquo;&lt;/p>
&lt;p>What actually prompted this article is the awesomely amazing folks on the &lt;a href="https://github.com/kubernetes/community/tree/master/communication">Contributor Comms
team&lt;/a> saying, &amp;ldquo;I need to squash my commits
and push that.&amp;rdquo; Which immediately made me remember the wonder of the Tide label:
&lt;a href="https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.md#tide/merge-method-squash">&lt;code>tide/merge-method-squash&lt;/code>&lt;/a>.&lt;/p>
&lt;h2 id="why-is-this-helpful">Why is this helpful&lt;/h2>
&lt;p>Contributing to Kubernetes will, most of the time, involve some kind of git-based action, specifically on the
Kubernetes GitHub. This can be an obstacle to those less exposed to &lt;code>git&lt;/code> and/or GitHub, and is especially
noticeable when we&amp;rsquo;re dealing with non-code contributions (documentation, blog posts, etc.).&lt;/p>
&lt;p>When a contributor submits something, it will generally be through a &lt;a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests">pull
request&lt;/a>. When
it comes to how the change will go from request to approval, there are a number of considerations that must be
made, such as:&lt;/p>
&lt;ul>
&lt;li>How should we request reviews?&lt;/li>
&lt;li>How do we assign the request to a specific SIG?&lt;/li>
&lt;li>How do we approve things while making it public and easily traceable?&lt;/li>
&lt;li>How to merge a contribution without carrying all the commit messages that were created during the review?&lt;/li>
&lt;/ul>
&lt;p>These are some of the main tasks in which Tide will help, allowing us to use the GitHub interface for these
tasks (and more), making the actions more visible to the community (since they are visible as plain comments
in the GitHub discussion), and allowing us to manage contributions without necessarily having to clone git
repositories or having to manually issue git commands.&lt;/p>
&lt;h2 id="back-to-squashing">Back to squashing&lt;/h2>
&lt;p>One of the most common examples, and getting back to my initial one, is squashing commits: if someone makes a
change in a PR, there will likely be reviews and changes, and each one of them will add a new commit
message. If left like this, the PR will add to the main branch all the commit messages created during the
review process, which will make the history of the main branch less readable: instead of a &lt;a href="https://www.kubernetes.dev/docs/guide/github-workflow/#squash-commits">single informative
description about a specific unit of
work&lt;/a>, it will contain multiple commit
messages that, out of the original context of the PR, will not be very helpful.&lt;/p>
&lt;p>To avoid this, we can &lt;em>squash&lt;/em> the commit messages to keep just one of them (usually, the first one):
the changes will still be visible for anyone reading the PR, but they will appear as a single commit to the
main branch.&lt;/p>
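&lt;p>To make the effect concrete, here is what squashing looks like with plain &lt;code>git&lt;/code>, in a throwaway repository with made-up commits. (&lt;code>git reset --soft&lt;/code> is just one way to squash; an interactive &lt;code>git rebase -i&lt;/code> is another.)&lt;/p>

```shell
# Build a toy repository with three commits, then squash them into one.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.invalid
git config user.name  "Demo"
for msg in "Add blog post" "Address review comments" "Fix typo"; do
  echo "$msg" >> post.md
  git add post.md
  git commit -q -m "$msg"
done
# Keep the working tree as-is, but collapse history: move HEAD back two
# commits (staging their changes), then fold them into the first commit.
git reset -q --soft HEAD~2
git commit -q --amend -m "Add blog post"
git rev-list --count HEAD   # prints: 1
```

&lt;p>The file still contains all three changes; only the history is reduced to a single informative commit. Tide&amp;rsquo;s squash label gets you the same outcome without running any of this locally.&lt;/p>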
&lt;p>This can be done &lt;a href="https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History">through git&lt;/a>, but even a cursory read
of how it&amp;rsquo;s done shows it is not obvious to someone relatively new to git! Is there a way to avoid cloning the
repository (or PR), issuing &lt;code>git&lt;/code> commands locally, and pushing the changes?&lt;/p>
&lt;p>There is, with Tide. Squashing your commits is a label away and the tooling will do the rest. To do this on
your PR you&amp;rsquo;ll need to comment with the following:&lt;/p>
&lt;p>&lt;code>/label tide/merge-method-squash&lt;/code>&lt;/p>
&lt;p>This will:&lt;/p>
&lt;ol>
&lt;li>Trigger Tide to squash the messages prior to merging.&lt;/li>
&lt;li>As a secondary effect, make your action clearly visible in the discussion section of the PR.&lt;/li>
&lt;/ol>
&lt;p>This use of Tide is one of the most useful when submitting changes that evolve during the PR
discussion, since it automates doing The Right Thing (TM).&lt;/p>
&lt;p>There&amp;rsquo;s often nothing better than an example, so let&amp;rsquo;s take a look at &lt;a href="https://github.com/kubernetes/website/pull/32685">this proposed change to the Kubernetes
website&lt;/a>, on the topic of the &lt;code>dockershim&lt;/code> removal FAQ. The
initial commit is &lt;a href="https://github.com/kubernetes/website/pull/32685/commits">followed by several others&lt;/a>, a
result of the conversation and proposed reviews. The result of all those changes &lt;a href="https://github.com/kubernetes/website/commit/a582a21cf00c88446a7feda4effd853b108c5c9c">is
merged&lt;/a> as a single
commit, with the commit message retaining the title of the very first commit done, and the commit description
being the aggregate of all the commits done in the PR. This was achieved by using &lt;code>/label tide/merge-method-squash&lt;/code> &lt;a href="https://github.com/kubernetes/website/pull/32685#issuecomment-1085801034">in a
comment&lt;/a>, and did away with the need
to manually rebase and/or squash using &lt;code>git&lt;/code>: everything was possible through the GitHub interface.&lt;/p>
&lt;h2 id="assignment-review-approval">Assignment, review, approval.&lt;/h2>
&lt;p>Another area in which Prow and Tide are very useful is in dealing with assignments, reviews, and approvals.&lt;/p>
&lt;p>Starting with &lt;strong>assignment&lt;/strong>, the need to assign a PR to someone is very common. There are ways to do it
through the GitHub interface, but using Prow commands, as mentioned before, makes the actions more visible and
explicitly trigger the automation mechanism. Using &lt;code>/assign&lt;/code> in a comment will assign the PR to yourself, while
&lt;code>/assign @someone&lt;/code> assigns it to someone else.&lt;/p>
&lt;p>Asking for &lt;strong>reviews&lt;/strong> is another very common task: with Prow it&amp;rsquo;s just a &lt;code>/cc @foo @bar @baz&lt;/code> away, and this
can be directly added in the initial PR description, or in any subsequent comment.&lt;/p>
&lt;p>&lt;strong>Approving&lt;/strong> a PR is one area in which making the process easily visible is very important, and,
unsurprisingly, we can use &lt;code>/lgtm&lt;/code> (&lt;em>looks good to me&lt;/em>) to publicly state our agreement, and at the same time
trigger the automated processes that will, hopefully, result in the merging of the contribution. Using
&lt;code>lgtm&lt;/code> adds (or, if using &lt;code>/lgtm cancel&lt;/code>, removes) the &lt;code>lgtm&lt;/code> label, while using &lt;code>/approve&lt;/code> will approve the
PR for merging (and can only be used by those with the necessary authorization).&lt;/p>
&lt;p>To summarize, &lt;code>/assign&lt;/code> makes it public that an assignment has been made (I use it often when I need to assign
an issue to myself), and by whom; &lt;code>/lgtm&lt;/code> makes it clear from the comments that a review was made and,
automatically, adds the &lt;code>lgtm&lt;/code> label which is required for approval, and &lt;code>/approve&lt;/code> approves the PR for
merging.&lt;/p>
&lt;p>An &lt;a href="https://github.com/kubernetes/community/pull/6765">example of many of these is this update to the Kubernetes Community
site&lt;/a>: we can see how additional reviewers were added with
&lt;code>/cc&lt;/code>, and following the discussion and changes, both the &lt;code>/lgtm&lt;/code> and &lt;code>/approve&lt;/code> commands are used to trigger
the merging.&lt;/p>
&lt;p>More information on the review and approval cycle can be found &lt;a href="https://kubernetes.io/docs/contribute/review/for-approvers/">in the
documentation&lt;/a>, which also explains in more
detail when certain commands should be used, and by whom.&lt;/p>
&lt;h2 id="more-about-prow-and-tide">More about Prow and Tide&lt;/h2>
&lt;p>The previous examples are some of the most commonly used, but Prow and Tide provide a lot more. These two
pages document all the functionality available to Kubernetes contributors (either through labels or Prow):&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://prow.kubernetes.io/command-help">Prow Command Help&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/test-infra/tree/master/label_sync">test-infra/label_sync/labels.md&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Labels are most commonly applied by using an associated command (e.g. &lt;code>/lgtm&lt;/code>, instead of &lt;code>/label lgtm&lt;/code>): only
when no such command exists are labels applied directly with the &lt;code>/label&lt;/code> command. Some of my most often used
commands and labels:&lt;/p>
&lt;ul>
&lt;li>&lt;code>/assign&lt;/code> (using it without adding a name assigns yourself)&lt;/li>
&lt;li>&lt;code>/honk&lt;/code>&lt;/li>
&lt;li>&lt;code>/(woof|bark|this-is-{fine|not-fine|unbearable})&lt;/code>&lt;/li>
&lt;li>&lt;code>/remove-lifecycle stale&lt;/code> (when issues aren&amp;rsquo;t touched for a period of time they&amp;rsquo;re marked stale)&lt;/li>
&lt;li>&lt;code>/shrug&lt;/code>&lt;/li>
&lt;li>&lt;code>/area contributor-comms&lt;/code> (You can use this to flag down the contributor communications team for reviews, comments on any issue, feedback, etc.)&lt;/li>
&lt;li>&lt;code>/label size/X&lt;/code> (Sizes are assigned automatically based on the number of lines changed in the PR)&lt;/li>
&lt;li>&lt;code>/hold&lt;/code> (This one is used for many things; if your PR is a work in progress, needs to be held to a certain date, etc.)&lt;/li>
&lt;li>&lt;code>/lgtm&lt;/code> (Adds the &lt;code>lgtm&lt;/code> label, which is typically used to gate merging; &lt;code>/lgtm cancel&lt;/code> removes it)&lt;/li>
&lt;li>&lt;code>/approve&lt;/code> (Approves a pull request; must be done by someone in the repo&amp;rsquo;s OWNERS file)&lt;/li>
&lt;/ul>
&lt;h2 id="more-advanced-usage">More advanced usage&lt;/h2>
&lt;p>What if you need a label that isn&amp;rsquo;t available on a certain GitHub repository? I&amp;rsquo;m glad you asked! This PR
demonstrates how to add labels to a repo:
&lt;a href="https://github.com/kubernetes/test-infra/pull/24315">https://github.com/kubernetes/test-infra/pull/24315&lt;/a>. You&amp;rsquo;ll
need to update the &lt;a href="https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.yaml">labels.yaml
file&lt;/a> (the configuration) and the
&lt;a href="https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.md">labels.md file&lt;/a> (documentation).&lt;/p>
&lt;p>This is why the &lt;a href="https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.md#intro">label_sync&lt;/a>
tool, along with the logic of Prow and Tide, simplifies GitHub-based processes: together they allow the automation of common
actions without necessarily having to leave the web-based GitHub interface. &lt;code>label_sync&lt;/code> ensures that labels
are applied uniformly across repositories.&lt;/p>
&lt;p>I&amp;rsquo;ve done this once in five years of contributing. But it&amp;rsquo;s good to write it down, as it isn&amp;rsquo;t as
trivial as you might think, given the importance of the label_sync tooling.&lt;/p>
&lt;p>These are a handful of the &lt;a href="https://prow.kubernetes.io/command-help">commands&lt;/a> and
&lt;a href="https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.md">labels&lt;/a> I enjoy. I&amp;rsquo;m sure there
are many others that are helpful to folks. With that in mind, see if there&amp;rsquo;s something you can benefit from in
these resources. They are there to make working on Kubernetes a better experience. If you think there&amp;rsquo;s some
functionality missing, I&amp;rsquo;d invite you to drop a Slack message in &lt;a href="https://kubernetes.slack.com/archives/C1TU9EB9S">SIG
ContribEx&lt;/a> or &lt;a href="https://kubernetes.slack.com/archives/C09QZ4DQB">SIG
Testing&lt;/a> to discuss.&lt;/p>
&lt;p>&lt;strong>Huge shoutout&lt;/strong>: To the folks that keep these systems humming along for the Kubernetes community. Couldn&amp;rsquo;t do it without y&amp;rsquo;all.&lt;/p></description></item><item><title>Blog: Implementing the Auto-refreshing Official Kubernetes CVE Feed</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/09/12/k8s-cve-feed-alpha/</link><pubDate>Mon, 12 Sep 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/09/12/k8s-cve-feed-alpha/</guid><description>
&lt;p>&lt;strong>Author&lt;/strong>: Pushkar Joglekar (VMware)&lt;/p>
&lt;p>Accompanying the release of Kubernetes v1.25, we announced
&lt;a href="https://kubernetes.io/blog/2022/09/12/k8s-cve-feed-alpha/">availability of an official CVE feed&lt;/a>
as an &lt;code>alpha&lt;/code> feature. This blog will cover how we implemented this feature.&lt;/p>
&lt;h2 id="implementation-details">Implementation Details&lt;/h2>
&lt;p>An &lt;a href="https://kubernetes.io/docs/reference/issues-security/official-cve-feed/">auto-refreshing CVE feed&lt;/a>
allows users and implementers to programmatically fetch the list of CVEs
announced by the Kubernetes SRC (Security Response Committee).&lt;/p>
&lt;p>To ensure freshness and minimal maintainer overhead, the feed updates
automatically by fetching the CVE related information from the CVE announcement
GitHub Issues. Creating these issues is already part of the existing Security
Response Committee (SRC) workflow.&lt;/p>
&lt;h3 id="pre-requisites">Pre-requisites&lt;/h3>
&lt;p>Until December 2021, it was not possible to filter for issues or PRs that are
tied to CVEs announced by Kubernetes SRC. We added a new
label, &lt;code>official-cve-feed&lt;/code>, to address that, and SIG Security labelled the relevant
issues with it. The in-scope issues are &lt;code>closed&lt;/code> issues that have one or more CVE
IDs and were officially announced as Kubernetes security vulnerabilities by the SRC.
You can now filter on all of these issues and find them
&lt;a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aclosed+label%3Aofficial-cve-feed+">here&lt;/a>.&lt;/p>
&lt;p>For future security vulnerabilities, we added the label to the SRC playbook so
that all the future in-scope issues will automatically have this label.&lt;/p>
&lt;h3 id="building-on-existing-tooling">Building on existing tooling&lt;/h3>
&lt;p>For the next step, we created a &lt;code>prow&lt;/code> job in order to periodically query the
GitHub REST API and pull the relevant issues. The job runs every two hours and
pushes the CVE related information fetched from GitHub into a Google Cloud
Bucket.&lt;/p>
&lt;p>For every website build (at least twice a day), &lt;code>Netlify&lt;/code> data templates make a
call to this Google Cloud Bucket to pull the CVE information and then parse it
into fields that are &lt;a href="https://www.jsonfeed.org/version/1.1/">JSON Feed v1.1&lt;/a>
compliant. The JSON file is available for programmatic consumption by automated
security tools. For humans, the JSON also gets transformed into a Markdown table
for easy viewing.&lt;/p>
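&lt;p>As an illustration of that last step, here is a sketch that renders a JSON Feed v1.1 document as a Markdown table. The feed content below is made-up sample data, and the item keys (&lt;code>id&lt;/code>, &lt;code>summary&lt;/code>, &lt;code>url&lt;/code>) are assumptions for illustration; check the published feed for the real field names:&lt;/p>

```shell
# A minimal sample feed, then a small Python script that renders it
# as a Markdown table (roughly what the website build produces).
cat > feed.json <<'EOF'
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "Kubernetes official CVE feed (sample)",
  "items": [
    {"id": "CVE-2021-0000", "summary": "Sample issue A", "url": "https://example.invalid/a"},
    {"id": "CVE-2022-0000", "summary": "Sample issue B", "url": "https://example.invalid/b"}
  ]
}
EOF
python3 - <<'EOF'
import json

with open("feed.json") as f:
    feed = json.load(f)

print("| CVE ID | Summary |")
print("|--------|---------|")
for item in feed["items"]:
    print(f"| [{item['id']}]({item['url']}) | {item['summary']} |")
EOF
```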
&lt;h2 id="design-considerations">Design Considerations&lt;/h2>
&lt;p>Building trust and ensuring that the feed is not stale were our main priorities
when designing this feature for success and widespread adoption.&lt;/p>
&lt;h3 id="integrity-and-access-control-protections">Integrity and Access Control Protections&lt;/h3>
&lt;p>Changes to any of the four artifacts used to build this feed could lead to feed
tampering, broken JSON, and inconsistent or stale data.&lt;/p>
&lt;p>Let&amp;rsquo;s look at how access is controlled for them one by one:&lt;/p>
&lt;h4 id="github-issues-for-publicly-announced-cves">GitHub Issues for Publicly Announced CVEs&lt;/h4>
&lt;p>Adding the &lt;code>official-cve-feed&lt;/code> label is restricted to a limited number of trusted
community members. Access to add this label is defined in
&lt;a href="https://github.com/kubernetes/test-infra/blob/master/config/prow/plugins.yaml#L149-L159">this configuration file&lt;/a>.
Any updates to this configuration file require the changes to go through the
existing &lt;a href="https://deploy-preview-670--kubernetes-contributor.netlify.app/docs/guide/pull-requests/">code review and approval process&lt;/a>.&lt;/p>
&lt;h4 id="prow-configuration">Prow Configuration&lt;/h4>
&lt;p>The Prow job is defined in
a &lt;a href="https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-k8s-infra/trusted/sig-security-trusted.yaml#L94-L115">&lt;code>kubernetes/test-infra&lt;/code> configuration file&lt;/a>.
The shell script to push and pull the data in the Google Cloud Bucket is defined in
a &lt;a href="https://github.com/kubernetes/sig-security/tree/main/sig-security-tooling/cve-feed/hack">&lt;code>kubernetes/sig-security&lt;/code> file&lt;/a>
under the &lt;code>sig-security-tooling&lt;/code> sub-project. Both of these files go through the
same code review and approval process mentioned earlier.&lt;/p>
&lt;h4 id="google-cloud-bucket">Google Cloud Bucket&lt;/h4>
&lt;p>Write access to the Google Cloud bucket is restricted to a set of
trusted community members managed via an
invite-only &lt;a href="https://github.com/kubernetes/k8s.io/blob/main/groups/sig-security/groups.yaml">Google Groups Membership&lt;/a>
under the &lt;code>kubernetes.io&lt;/code> domain.&lt;/p>
&lt;h4 id="website-data-templates">Website Data templates&lt;/h4>
&lt;p>Website data templates that fetch and parse the stored JSON blob are managed
under &lt;code>kubernetes/website&lt;/code> and have to follow the same code review and approval
process as mentioned earlier.&lt;/p>
&lt;h3 id="freshness-guarantees">Freshness Guarantees&lt;/h3>
&lt;p>The feed is updated whenever new CVE data is available, by periodically
verifying that the generated data differs from the data stored in the feed.&lt;/p>
&lt;p>The &lt;code>prow&lt;/code> job runs every two hours and compares the &lt;code>sha256&lt;/code> checksum of the
existing contents of the bucket with the checksum of the latest JSON file generated
from the GitHub issues. If there is new data available, the hashes do not
match (typically because of a newly announced CVE) and the updated JSON file is
pushed to the bucket, replacing the old file and its checksum.&lt;/p>
&lt;p>If the hashes match, the &lt;code>write to bucket&lt;/code> operation is skipped to reduce
redundant updates to the cloud bucket. This also sets us up for more frequent
runs of the prow job if needed in the future.&lt;/p>
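&lt;p>A minimal sketch of that comparison (the function name is ours; the real logic lives in the shell script under &lt;code>sig-security-tooling&lt;/code> mentioned earlier):&lt;/p>

```python
import hashlib

# Push to the bucket only when the sha256 of the freshly generated
# JSON differs from the checksum of the stored copy.
def needs_update(stored_bytes, generated_bytes):
    return (hashlib.sha256(stored_bytes).hexdigest()
            != hashlib.sha256(generated_bytes).hexdigest())
```

&lt;p>Identical content yields the same digest, so the write-to-bucket step is skipped and the bucket is touched only on real changes.&lt;/p>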
&lt;h2 id="whats-next">What&amp;rsquo;s Next?&lt;/h2>
&lt;p>If you would like to get involved in future iterations of this feed or other
security relevant work, please consider
joining &lt;a href="https://github.com/kubernetes/community/tree/master/sig-security#contact">Kubernetes SIG Security&lt;/a>
by attending our bi-weekly meetings or hanging out with us on our Slack channel.&lt;/p>
&lt;p>&lt;em>A special shout out and massive thanks to Neha Lohia
&lt;a href="https://github.com/nehalohia27">(@nehalohia27)&lt;/a> and Tim
Bannister &lt;a href="https://github.com/sftim">(@sftim)&lt;/a> for their stellar collaboration
for many months from &amp;ldquo;ideation to implementation&amp;rdquo; of this feature.&lt;/em>&lt;/p></description></item><item><title>Blog: Enhancements Opt-in Process Change for v1.26</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/09/09/enhancements-opt-in/</link><pubDate>Fri, 09 Sep 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/09/09/enhancements-opt-in/</guid><description>
&lt;p>&lt;strong>Author:&lt;/strong> Grace Nguyen&lt;/p>
&lt;h2 id="context-and-motivations">Context and Motivations&lt;/h2>
&lt;p>Since the inception of the Kubernetes release team, we have used a spreadsheet to keep track of enhancements for the release. The project has scaled massively in the past few years, with almost a hundred enhancements collected for the 1.24 release. This process has become error-prone and time-consuming. A lot of manual work is required from the release team and the SIG leads to populate KEP data in the sheet. We have received continuous feedback from our contributors to streamline the process.&lt;/p>
&lt;p>Starting with the 1.26 release, we are replacing the enhancements tracking spreadsheet with an automated &lt;a href="https://github.com/orgs/kubernetes/projects/98">GitHub project board&lt;/a>.&lt;/p>
&lt;h2 id="how-does-the-github-project-board-work">How does the Github Project Board work?&lt;/h2>
&lt;p>The board is populated with a script gathering all KEP issues in the &lt;code>kubernetes/enhancements&lt;/code> repo that have the label &lt;code>lead-opted-in&lt;/code>. The enhancements&amp;rsquo; stage and SIG information will also be automatically pulled from the KEP issue.&lt;/p>
&lt;p>After the KEP is populated on the GitHub Project Board, the Enhancements team will manually update the KEP with one of the labels &lt;code>tracked/yes&lt;/code>, &lt;code>tracked/no&lt;/code>, or, on occasion, &lt;code>tracked/out-of-tree&lt;/code>. The &lt;code>tracked&lt;/code> label signifies qualification for the closest approaching milestone. For example, at the beginning of the release, &lt;code>tracked/yes&lt;/code> means that the KEP has satisfied all Enhancements Freeze requirements; similarly, at Code Freeze, &lt;code>tracked/yes&lt;/code> means that all code related to the KEP has been merged. The &lt;code>tracked&lt;/code> labels are reserved for the Enhancements team&amp;rsquo;s use only.&lt;/p>
&lt;h2 id="what-does-this-mean-for-the-community">What does this mean for the community?&lt;/h2>
&lt;p>If you are not a SIG lead, nothing will change besides the view of the enhancements collection and the change of platform. KEP authors will continue working with their respective SIG leads to opt in to the release.&lt;/p>
&lt;p>For SIG leads, opting in is simple. The KEP issue will be the single source of truth so ensure that all metadata is up to date. Simply comment &lt;code>/label lead-opted-in&lt;/code> on the enhancement tracking issue to opt it into the current release. That&amp;rsquo;s all you need to do to opt in! Since the script runs periodically, kindly come back to check that the KEP is on the board, labeled with &lt;code>tracked/yes&lt;/code>, and that there is an Enhancements team member assigned to it.&lt;/p>
&lt;p>We are excited to bring this highly requested feature into our release process and appreciate your patience. Email us at &lt;a href="mailto:release-enhancements-team@kubernetes.io">release-enhancements-team@kubernetes.io&lt;/a> or find us on Slack at &lt;a href="https://kubernetes.slack.com/archives/C02BY55KV7E">#release-enhancements&lt;/a> if you have any feedback, questions, or concerns.&lt;/p>
&lt;p>Since the very beginning of Kubernetes, the topic of persistent data and how to address the requirement of stateful applications has been an important topic. Support for stateless deployments was natural, present from the start, and garnered attention, becoming very well-known. Work on better support for stateful applications was also present from early on, with each release increasing the scope of what could be run on Kubernetes.&lt;/p>
&lt;p>Message queues, databases, clustered filesystems: these are some examples of the solutions that have different storage requirements and that are, today, increasingly deployed in Kubernetes. Dealing with ephemeral and persistent storage, local or remote, file or block, from many different vendors, while considering how to provide the needed resiliency and data consistency that users expect, all of this is under SIG Storage&amp;rsquo;s umbrella.&lt;/p>
&lt;p>In this SIG Storage spotlight, &lt;a href="https://twitter.com/fredericomunoz">Frederico Muñoz&lt;/a> (Cloud &amp;amp; Architecture Lead at SAS) talked with &lt;a href="https://twitter.com/2000xyang">Xing Yang&lt;/a>, Tech Lead at VMware and co-chair of SIG Storage, on how the SIG is organized, what are the current challenges and how anyone can get involved and contribute.&lt;/p>
&lt;h2 id="about-sig-storage">About SIG Storage&lt;/h2>
&lt;p>&lt;strong>Frederico (FSM)&lt;/strong>: Hello, thank you for the opportunity of learning more about SIG Storage. Could you tell us a bit about yourself, your role, and how you got involved in SIG Storage.&lt;/p>
&lt;p>&lt;strong>Xing Yang (XY)&lt;/strong>: I am a Tech Lead at VMware, working on Cloud Native Storage. I am also a Co-Chair of SIG Storage. I started to get involved in K8s SIG Storage at the end of 2017, starting with contributing to the &lt;a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/">VolumeSnapshot&lt;/a> project. At that time, the VolumeSnapshot project was still in an experimental, pre-alpha stage. It needed contributors. So I volunteered to help. Then I worked with other community members to bring VolumeSnapshot to Alpha in K8s 1.12 release in 2018, Beta in K8s 1.17 in 2019, and eventually GA in 1.20 in 2020.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Reading the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-storage/charter.md">SIG Storage charter&lt;/a> alone it’s clear that SIG Storage covers a lot of ground, could you describe how the SIG is organised?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: In SIG Storage, there are two Co-Chairs and two Tech Leads. Saad Ali from Google and myself are Co-Chairs. Michelle Au from Google and Jan Šafránek from Red Hat are Tech Leads.&lt;/p>
&lt;p>We have bi-weekly meetings where we go through features we are working on for each particular release, getting the statuses, making sure each feature has dev owners and reviewers working on it, and reminding people about the release deadlines, etc. More information on the SIG is on the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-storage">community page&lt;/a>. People can also add PRs that need attention, design proposals that need discussion, and other topics to the meeting agenda doc. We will go over them after project tracking is done.&lt;/p>
&lt;p>We also have other regular meetings, i.e., CSI Implementation meeting, Object Bucket API design meeting, and one-off meetings for specific topics if needed. There is also a K8s &lt;a href="https://github.com/kubernetes/community/blob/master/wg-data-protection/README.md">Data Protection Working Group&lt;/a> that is sponsored by SIG Storage and SIG Apps. SIG Storage owns or co-owns features that are being discussed at the Data Protection WG.&lt;/p>
&lt;h2 id="storage-and-kubernetes">Storage and Kubernetes&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Storage is such a foundational component in so many things, not least in Kubernetes: what do you think are the Kubernetes-specific challenges in terms of storage management?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: In Kubernetes, there are multiple components involved for a volume operation. For example, creating a Pod to use a PVC has multiple components involved. There are the Attach Detach Controller and the external-attacher working on attaching the PVC to the pod. There’s the kubelet that works on mounting the PVC to the pod. Of course the CSI driver is involved as well. There could be race conditions sometimes when coordinating between multiple components.&lt;/p>
&lt;p>Another challenge is regarding core versus &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/">CustomResourceDefinitions&lt;/a> (CRD), not really storage specific. CRD is a great way to extend Kubernetes capabilities while not adding too much code to the Kubernetes core itself. However, this also means there are many external components that are needed when running a Kubernetes cluster.&lt;/p>
&lt;p>From the SIG Storage side, the most notable example is VolumeSnapshot. Volume snapshot APIs are defined as CRDs. API definitions and controllers are out-of-tree. There is a common snapshot controller that should be deployed on the control plane, similar to how &lt;code>kube-controller-manager&lt;/code> is deployed. Although VolumeSnapshot is a CRD, it is a core feature of SIG Storage. It is recommended for K8s cluster distros to deploy VolumeSnapshot CRDs, the snapshot controller, and the snapshot validation webhook; however, most of the time we &lt;em>don’t&lt;/em> see distros deploy them. So this becomes a problem for the storage vendors: it becomes their responsibility to deploy these non-driver-specific common components. This could cause conflicts if a customer wants to use more than one storage system and deploy more than one CSI driver.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Not only the complexity of a single storage system, you have to consider how they will be used together in Kubernetes?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: Yes, there are many different storage systems that can provide storage to containers in Kubernetes. They don’t work the same way. It is challenging to find a solution that works for everyone.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Storage in Kubernetes also involves interacting with external solutions, perhaps more so than other parts of Kubernetes. Is this interaction with vendors and external providers challenging? Has it evolved with time in any way?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: Yes, it is definitely challenging. Initially, Kubernetes storage had in-tree volume plugin interfaces. Multiple storage vendors implemented these in-tree interfaces and had volume plugins in the Kubernetes core code base. This caused lots of problems. If there was a bug in a volume plugin, it affected the entire Kubernetes code base. All volume plugins had to be released together with Kubernetes. There was no flexibility if storage vendors needed to fix a bug in their plugin or wanted to align with their own product releases.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: That’s where CSI enters the game?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: Exactly, then there comes &lt;a href="https://kubernetes-csi.github.io/docs/">Container Storage Interface&lt;/a> (CSI). This is an industry standard trying to design common storage interfaces so that a storage vendor can write one plugin and have it work across a range of container orchestration (CO) systems. Now Kubernetes is the main CO, but back when CSI just started, there were Docker, Mesos, Cloud Foundry, in addition to Kubernetes. CSI drivers are out-of-tree so bug fixes and releases can happen at their own pace.&lt;/p>
&lt;p>CSI is definitely a big improvement compared to in-tree volume plugins. Kubernetes implementation of CSI has been GA &lt;a href="https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/">since the 1.13 release&lt;/a>. It has come a long way. SIG Storage has been working on moving in-tree volume plugins to out-of-tree CSI drivers for several releases now.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Moving drivers away from the Kubernetes main tree and into CSI was an important improvement.&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: The CSI interface is an improvement over the in-tree volume plugin interface; however, there are still challenges. There are lots of storage systems. Currently &lt;a href="https://kubernetes-csi.github.io/docs/drivers.html">there are more than 100 CSI drivers listed in the CSI driver docs&lt;/a>. These storage systems are also very diverse, so it is difficult to design a common API that works for all. We introduced capabilities at the CSI driver level, but we also face challenges when volumes provisioned by the same driver have different behaviors. The other day we had a meeting discussing Per Volume CSI Driver Capabilities. We have a problem differentiating some CSI driver capabilities when the same driver supports both block and file volumes. We are going to have follow-up meetings to discuss this problem.&lt;/p>
&lt;h2 id="ongoing-challenges">Ongoing challenges&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Specifically for the &lt;a href="https://github.com/kubernetes/sig-release/tree/master/releases/release-1.25">1.25 release&lt;/a> we can see that there are a relevant number of storage-related &lt;a href="https://bit.ly/k8s125-enhancements">KEPs&lt;/a> in the pipeline, would you say that this release is particularly important for the SIG?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: I wouldn’t say one release is more important than other releases. In any given release, we are working on a few very important things.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Indeed, but are there any 1.25-specific highlights you would like to point out?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: Yes. For the 1.25 release, I want to highlight the following:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://kubernetes.io/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/#quick-recap-what-is-csi-migration-and-why-migrate">CSI Migration&lt;/a> is an on-going effort that SIG Storage has been working on for a few releases now. The goal is to move in-tree volume plugins to out-of-tree CSI drivers and eventually remove the in-tree volume plugins. There are 7 KEPs that we are targeting in 1.25 are related to CSI migration. There is one &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/625-csi-migration">core KEP&lt;/a> for the general CSI Migration feature. That is targeting GA in 1.25. CSI Migration for GCE PD and AWS EBS are targeting GA. CSI Migration for vSphere is targeting to have the feature gate on by default while staying in 1.25 that are in Beta. Ceph RBD and PortWorx are targeting Beta, with feature gate off by default. Ceph FS is targeting Alpha.&lt;/li>
&lt;li>The second one I want to highlight is &lt;a href="https://github.com/kubernetes-sigs/container-object-storage-interface-spec">COSI, the Container Object Storage Interface&lt;/a>. This is a sub-project under SIG Storage. COSI proposes object storage Kubernetes APIs to support orchestration of object store operations for Kubernetes workloads. It also introduces gRPC interfaces for object storage providers to write drivers to provision buckets. The COSI team has been working on this project for more than two years now. The COSI feature is targeting Alpha in 1.25. The KEP just got merged. The COSI team is working on updating the implementation based on the updated KEP.&lt;/li>
&lt;li>Another feature I want to mention is &lt;a href="https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes">CSI Ephemeral Volume&lt;/a> support. This feature allows CSI volumes to be specified directly in the pod specification for ephemeral use cases. They can be used to inject arbitrary states, such as configuration, secrets, identity, variables or similar information, directly inside pods using a mounted volume. This was initially introduced in 1.15 as an alpha feature, and it is now &lt;a href="https://github.com/kubernetes/enhancements/issues/596">targeting GA&lt;/a> in 1.25.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>FSM&lt;/strong>: If you had to single something out, what would be the most pressing areas the SIG is working on?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: CSI migration is definitely one area that the SIG has put in lots of effort and it has been on-going for multiple releases now. It involves work from multiple cloud providers and storage vendors as well.&lt;/p>
&lt;h2 id="community-involvement">Community involvement&lt;/h2>
&lt;p>&lt;strong>FSM&lt;/strong>: Kubernetes is a community-driven project. Any recommendation for anyone looking into getting involved in SIG Storage work? Where should they start?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: Take a look at the &lt;a href="https://github.com/kubernetes/community/tree/master/sig-storage">SIG Storage community page&lt;/a>, it has lots of information on how to get started. There are &lt;a href="https://github.com/kubernetes/community/blob/master/sig-storage/annual-report-2021.md">SIG annual reports&lt;/a> that tell you what we did each year. Take a look at the Contributing guide. It has links to presentations that can help you get familiar with Kubernetes storage concepts.&lt;/p>
&lt;p>Join our &lt;a href="https://github.com/kubernetes/community/tree/master/sig-storage#meetings">bi-weekly meetings on Thursdays&lt;/a>. Learn how the SIG operates and what we are working on for each release. Find a project that you are interested in and help out. As I mentioned earlier, I got started in SIG Storage by contributing to the Volume Snapshot project.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Any closing thoughts you would like to add?&lt;/p>
&lt;p>&lt;strong>XY&lt;/strong>: SIG Storage always welcomes new contributors. We need contributors to help with building new features, fixing bugs, doing code reviews, writing tests, monitoring test grid health, and improving documentation, etc.&lt;/p>
&lt;p>&lt;strong>FSM&lt;/strong>: Thank you so much for your time and insights into the workings of SIG Storage!&lt;/p></description></item><item><title>Blog: Meet Our Contributors - APAC (China region)</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/15/meet-our-contributors-china-ep-03/</link><pubDate>Mon, 15 Aug 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/15/meet-our-contributors-china-ep-03/</guid><description>
&lt;p>&lt;strong>Authors &amp;amp; Interviewers:&lt;/strong> &lt;a href="https://github.com/AvineshTripathi">Avinesh Tripathi&lt;/a>, &lt;a href="https://github.com/Debanitrkl">Debabrata Panigrahi&lt;/a>, &lt;a href="https://github.com/jayesh-srivastava">Jayesh Srivastava&lt;/a>, &lt;a href="https://github.com/Priyankasaggu11929/">Priyanka Saggu&lt;/a>, &lt;a href="https://github.com/PurneswarPrasad">Purneswar Prasad&lt;/a>, &lt;a href="https://github.com/vedant-kakde">Vedant Kakde&lt;/a>&lt;/p>
&lt;hr>
&lt;p>Hello, everyone 👋&lt;/p>
&lt;p>Welcome back to the third edition of the &amp;ldquo;Meet Our Contributors&amp;rdquo; blog post series for APAC.&lt;/p>
&lt;p>This post features four outstanding contributors from China, who have played diverse leadership and community roles in the upstream Kubernetes project.&lt;/p>
&lt;p>So, without further ado, let&amp;rsquo;s get straight to the article.&lt;/p>
&lt;h2 id="andy-zhanghttpsgithubcomandyzhangx">&lt;a href="https://github.com/andyzhangx">Andy Zhang&lt;/a>&lt;/h2>
&lt;p>Andy Zhang currently works for Microsoft China at the Shanghai site. His main focus is on Kubernetes storage drivers. Andy started contributing to Kubernetes about 5 years ago.&lt;/p>
&lt;p>He works in the Azure Kubernetes Service team and spends most of his time contributing to the Kubernetes community project. He is now a main contributor to quite a few Kubernetes subprojects, such as the Kubernetes cloud provider code.&lt;/p>
&lt;p>His open source contributions are mainly self-motivated. In the last two years he has mentored a few students contributing to Kubernetes through the LFX Mentorship program, some of whom got jobs due to their expertise and contributions on Kubernetes projects.&lt;/p>
&lt;p>Andy is an active member of the China Kubernetes community. He adds that the Kubernetes community has a good guide about how to become a member, code reviewer, and approver. When he found that some open source projects were at a very early stage, he actively contributed to those projects and became a project maintainer.&lt;/p>
&lt;h2 id="shiming-zhanghttpsgithubcomwzshiming">&lt;a href="https://github.com/wzshiming">Shiming Zhang&lt;/a>&lt;/h2>
&lt;p>Shiming Zhang is a Software Engineer working on Kubernetes for DaoCloud in Shanghai, China.&lt;/p>
&lt;p>He has mostly been involved with SIG Node as a reviewer. His major contributions have mainly been bug fixes and feature improvements in an ongoing &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2712-pod-priority-based-graceful-node-shutdown">KEP&lt;/a>, all revolving around SIG Node.&lt;/p>
&lt;p>Some of his major PRs are &lt;a href="https://github.com/kubernetes/kubernetes/pull/100326">fixing watchForLockfileContention memory leak&lt;/a>, &lt;a href="https://github.com/kubernetes/kubernetes/pull/101093">fixing startupProbe behaviour&lt;/a>, &lt;a href="https://github.com/kubernetes/enhancements/pull/2661">adding Field status.hostIPs for Pod&lt;/a>.&lt;/p>
&lt;h2 id="paco-xuhttpsgithubcompacoxu">&lt;a href="https://github.com/pacoxu">Paco Xu&lt;/a>&lt;/h2>
&lt;p>Paco Xu works at DaoCloud, a Shanghai-based cloud-native firm. He works with the infra and the open source team, focusing on enterprise cloud native platforms based on Kubernetes.&lt;/p>
&lt;p>He started with Kubernetes in early 2017 and his first contribution was in March 2018. He started with a bug that he found, but his solution was not that graceful, hence wasn&amp;rsquo;t accepted. He then started with some good first issues, which helped him to a great extent. In addition to this, from 2016 to 2017, he made some minor contributions to Docker.&lt;/p>
&lt;p>Currently, Paco is a reviewer for &lt;code>kubeadm&lt;/code> (a SIG Cluster Lifecycle product), and for SIG Node.&lt;/p>
&lt;p>Paco says that you should contribute to open source projects you use. For him, an open source project is like a book to learn, getting inspired through discussions with the project maintainers.&lt;/p>
&lt;blockquote>
&lt;p>In my opinion, the best way for me is learning how owners work on the project.&lt;/p>
&lt;/blockquote>
&lt;h2 id="jintao-zhanghttpsgithubcomtao12345666333">&lt;a href="https://github.com/tao12345666333">Jintao Zhang&lt;/a>&lt;/h2>
&lt;p>Jintao Zhang is presently employed at API7, where he focuses on ingress and service mesh.&lt;/p>
&lt;p>In 2017, he encountered an issue which led to a community discussion and his contributions to Kubernetes started. Before contributing to Kubernetes, Jintao was a long-time contributor to Docker-related open source projects.&lt;/p>
&lt;p>Currently Jintao is a reviewer for the &lt;a href="https://kubernetes.github.io/ingress-nginx/">ingress-nginx&lt;/a> project.&lt;/p>
&lt;p>He suggests keeping track of job opportunities at open source companies so that you can find one that allows you to contribute full time. For new contributors Jintao says that if anyone wants to make a significant contribution to an open source project, then they should choose the project based on their interests and should generously invest time.&lt;/p>
&lt;hr>
&lt;p>If you have any recommendations/suggestions for who we should interview next, please let us know in the &lt;a href="https://kubernetes.slack.com/archives/C1TU9EB9S">#sig-contribex&lt;/a> channel on the Kubernetes Slack. Your suggestions would be much appreciated. We&amp;rsquo;re thrilled to have additional folks assisting us in reaching out to even more wonderful individuals of the community.&lt;/p>
&lt;p>We&amp;rsquo;ll see you all in the next one. Till then, everyone, happy contributing! 👋&lt;/p></description></item><item><title>Blog: Enhancing Kubernetes one KEP at a Time</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</link><pubDate>Thu, 11 Aug 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/11/enhancing-kubernetes-one-kep-at-a-time/</guid><description>
&lt;p>&lt;strong>Author:&lt;/strong> Ryler Hockenbury (Mastercard)&lt;/p>
&lt;p>Did you know that Kubernetes v1.24 has &lt;a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/">46 enhancements&lt;/a>? That&amp;rsquo;s a lot of new functionality packed into a 4-month release cycle. The Kubernetes release team coordinates the logistics of the release, from remediating test flakes to publishing updated docs. It&amp;rsquo;s a ton of work, but they always deliver.&lt;/p>
&lt;p>The release team comprises around 30 people across six subteams - Bug Triage, CI Signal, Enhancements, Release Notes, Communications, and Docs.  Each of these subteams manages a component of the release. This post will focus on the role of the enhancements subteam and how you can get involved.&lt;/p>
&lt;h2 id="whats-the-enhancements-subteam">What&amp;rsquo;s the enhancements subteam?&lt;/h2>
&lt;p>Great question. We&amp;rsquo;ll get to that in a second but first, let&amp;rsquo;s talk about how features are managed in Kubernetes.&lt;/p>
&lt;p>Each new feature requires a &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/README.md">Kubernetes Enhancement Proposal&lt;/a> - KEP for short. KEPs are small structured design documents that provide a way to propose and coordinate new features. The KEP author describes the motivation, design (and alternatives), risks, and tests - then community members provide feedback to build consensus.&lt;/p>
&lt;p>KEPs are submitted and updated through a pull request (PR) workflow on the &lt;a href="https://github.com/kubernetes/enhancements">k/enhancements repo&lt;/a>. Features start in alpha and move through a graduation process to beta and stable as they mature. For example, here&amp;rsquo;s a cool KEP about &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-windows/1981-windows-privileged-container-support/kep.yaml">privileged container support on Windows Server&lt;/a>.  It was introduced as alpha in Kubernetes v1.22 and graduated to beta in v1.23.&lt;/p>
&lt;p>Now getting back to the question - the enhancements subteam coordinates the lifecycle tracking of the KEPs for each release. Each KEP is required to meet a set of requirements to be cleared for inclusion in a release. The enhancements subteam verifies each requirement for each KEP and tracks the status.&lt;/p>
&lt;p>At the start of a release, &lt;a href="https://github.com/kubernetes/community/blob/master/sig-list.md">Kubernetes Special Interest Groups&lt;/a> (SIGs) submit their enhancements to opt into a release. A typical release might have from 60 to 90 enhancements at the beginning.  During the release, many enhancements will drop out. Some do not quite meet the KEP requirements, and others do not complete their implementation in code. About 60%-70% of the opted-in KEPs will make it into the final release.&lt;/p>
&lt;h2 id="what-does-the-enhancements-subteam-do">What does the enhancements subteam do?&lt;/h2>
&lt;p>Another great question, keep them coming! The enhancements team is involved in two crucial milestones during each release: enhancements freeze and code freeze.&lt;/p>
&lt;h4 id="enhancements-freeze">Enhancements Freeze&lt;/h4>
&lt;p>Enhancements freeze is the deadline for a KEP to be complete in order for the enhancement to be included in a release. It&amp;rsquo;s a quality gate to enforce alignment around maintaining and updating KEPs. The most notable requirements are (1) a &lt;a href="https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md">production readiness review&lt;/a> (PRR) and (2) a &lt;a href="https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template">KEP file&lt;/a> with a complete test plan and graduation criteria.&lt;/p>
&lt;p>The enhancements subteam communicates to each KEP author through comments on the KEP issue on GitHub. As a first step, they&amp;rsquo;ll verify the status and check if it meets the requirements. The KEP gets marked as tracked after satisfying the requirements; otherwise, it&amp;rsquo;s considered at risk. If a KEP is still at risk when enhancement freeze is in effect, the KEP is removed from the release.&lt;/p>
&lt;p>This part of the cycle is typically the busiest for the enhancements subteam because of the large number of KEPs to groom, and each KEP might need to be visited multiple times to verify whether it meets requirements.&lt;/p>
&lt;h4 id="code-freeze">Code Freeze&lt;/h4>
&lt;p>Code freeze is the implementation deadline for all enhancements. The code must be implemented, reviewed, and merged by this point if a code change or update is needed for the enhancement. The latter third of the release is focused on stabilizing the codebase - fixing flaky tests, resolving various regressions, and preparing docs - and all the code needs to be in place before those steps can happen.&lt;/p>
&lt;p>The enhancements subteam verifies that all PRs for an enhancement are merged into the &lt;a href="https://github.com/kubernetes/kubernetes">Kubernetes codebase&lt;/a> (k/k). During this period, the subteam reaches out to KEP authors to understand what PRs are part of the KEP, verifies that those PRs get merged, and then updates the status of the KEP. The enhancement is removed from the release if the code isn&amp;rsquo;t all merged before the code freeze deadline.&lt;/p>
&lt;h2 id="how-can-i-get-involved-with-the-release-team">How can I get involved with the release team?&lt;/h2>
&lt;p>I&amp;rsquo;m glad you asked. The most direct way is to apply to be a &lt;a href="https://github.com/kubernetes/sig-release/blob/master/release-team/shadows.md">release team shadow&lt;/a>. The shadow role is a hands-on apprenticeship intended to prepare individuals for leadership positions on the release team. Many shadow roles are non-technical and do not require prior contributions to the Kubernetes codebase.&lt;/p>
&lt;p>With 3 Kubernetes releases every year and roughly 25 shadows per release, the release team is always in need of individuals wanting to contribute. Before each release cycle, the release team opens the application for the shadow program. When the application goes live, it&amp;rsquo;s posted in the &lt;a href="https://groups.google.com/a/kubernetes.io/g/dev">Kubernetes Dev Mailing List&lt;/a>.  You can subscribe to notifications from that list (or check it regularly!) to watch when the application opens. The announcement will typically go out in mid-April, mid-July, and mid-December - or roughly a month before the start of each release.&lt;/p>
&lt;h2 id="how-can-i-find-out-more">How can I find out more?&lt;/h2>
&lt;p>Check out the &lt;a href="https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks">role handbooks&lt;/a> if you&amp;rsquo;re curious about the specifics of all the Kubernetes release subteams. The handbooks capture the logistics of each subteam, including a week-by-week breakdown of the subteam activities.  It&amp;rsquo;s an excellent reference for getting to know each team better.&lt;/p>
&lt;p>You can also check out the release-related Kubernetes slack channels - particularly #release, #sig-release, and #sig-arch. These channels have discussions and updates surrounding many aspects of the release.&lt;/p></description></item><item><title>Blog: Spotlight on SIG Docs</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/02/sig-docs-spotlight-2022/</link><pubDate>Tue, 02 Aug 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/08/02/sig-docs-spotlight-2022/</guid><description>
&lt;p>&lt;strong>Author:&lt;/strong> Purneswar Prasad&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The official documentation is the go-to source for any open source project. For Kubernetes, it is maintained by an
ever-evolving Special Interest Group (SIG) whose members constantly put in effort
to make details about the project easier to consume for new contributors and users. SIG Docs publishes
the official documentation on &lt;a href="https://kubernetes.io">kubernetes.io&lt;/a>, which includes,
but is not limited to, documentation of the core APIs, core architectural details, and the CLI tools
shipped with the Kubernetes release.&lt;/p>
&lt;p>To learn more about the work of SIG Docs and its future in shaping the community, I have summarised
my conversation with the co-chairs, &lt;a href="https://twitter.com/Divya_Mohan02">Divya Mohan&lt;/a> (DM),
&lt;a href="https://twitter.com/reylejano">Rey Lejano&lt;/a> (RL) and Natali Vlatko (NV), who ran through the
SIG&amp;rsquo;s goals and how fellow contributors can help.&lt;/p>
&lt;h2 id="a-summary-of-the-conversation">A summary of the conversation&lt;/h2>
&lt;h3 id="could-you-tell-us-a-little-bit-about-what-sig-docs-does">Could you tell us a little bit about what SIG Docs does?&lt;/h3>
&lt;p>SIG Docs is the special interest group for documentation for the Kubernetes project on kubernetes.io,
generating reference guides for the Kubernetes API, kubeadm and kubectl as well as maintaining the official
website’s infrastructure and analytics. The remit of their work also extends to docs releases, translation of docs,
improving and adding new features to existing documentation, and pushing and reviewing content for the official
Kubernetes blog and engaging with the Release Team for each cycle to get docs and blogs reviewed.&lt;/p>
&lt;h3 id="there-are-2-subprojects-under-docs-blogs-and-localization-how-has-the-community-benefited-from-it-and-are-there-some-interesting-contributions-by-those-teams-you-want-to-highlight">There are 2 subprojects under Docs: blogs and localization. How has the community benefited from it and are there some interesting contributions by those teams you want to highlight?&lt;/h3>
&lt;p>&lt;strong>Blogs&lt;/strong>: This subproject highlights new or graduated Kubernetes enhancements, community reports, SIG updates
and other news relevant to the Kubernetes community, such as thought leadership, tutorials and project updates -
for example, the Dockershim removal and the removal of PodSecurityPolicy, which is upcoming in the 1.25 release.
Tim Bannister, one of the SIG Docs tech leads, does awesome work and is a major force in pushing contributions
through to the docs and blogs.&lt;/p>
&lt;p>&lt;strong>Localization&lt;/strong>: With this subproject, the Kubernetes community has been able to achieve greater inclusivity
and diversity among both users and contributors. This has also helped the project gain more contributors,
especially students, over the past couple of years.
Two of the major up-and-coming localizations are Hindi and Bengali. The efforts for Hindi
localization are currently being spearheaded by students in India.&lt;/p>
&lt;p>In addition to that, there are two other subprojects: &lt;a href="https://github.com/kubernetes-sigs/reference-docs">reference-docs&lt;/a> and the &lt;a href="https://github.com/kubernetes/website">website&lt;/a>, which is built with Hugo and is an important ownership area.&lt;/p>
&lt;h3 id="dockershim-removal">Recently there has been a lot of buzz around the Kubernetes ecosystem as well as the industry regarding the removal of dockershim in the latest 1.24 release. How has SIG Docs helped the project to ensure a smooth change among the end-users?&lt;/h3>
&lt;p>Documenting the removal of Dockershim was a mammoth task, requiring the revamping of existing documentation
and communicating to the various stakeholders regarding the deprecation efforts. It needed a community effort,
so ahead of the 1.24 release, SIG Docs partnered with Docs and Comms verticals, the Release Lead from the
Release Team, and also the CNCF to help put the word out. Weekly meetings and a GitHub project board were
set up to track progress, review issues, approve PRs and keep the Kubernetes website updated. This has
also helped new contributors learn about the deprecation, so that if any good-first-issue popped up, they could chip in.
A dedicated Slack channel was used to communicate meeting updates, invite feedback or to solicit help on
outstanding issues and PRs. The weekly meeting also continued for a month after the 1.24 release to review related issues and fix them.
A huge shoutout to &lt;a href="https://twitter.com/celeste_horgan">Celeste Horgan&lt;/a>, who kept the ball rolling on this
conversation throughout the deprecation process.&lt;/p>
&lt;h3 id="why-should-new-and-existing-contributors-consider-joining-this-sig">Why should new and existing contributors consider joining this SIG?&lt;/h3>
&lt;p>Kubernetes is a vast project, and at first it can be intimidating for a lot of folks to find a place to start.
Any open source project is defined by the quality of its documentation, and SIG Docs aims to be a welcoming,
helpful place for new contributors to get on board. One gets the perks of working with the project docs
as well as learning by reading them. Contributors can also bring their own new perspective to create and improve
the documentation. In the long run, if they stick with SIG Docs, they can rise up the ladder to become maintainers.
This helps make a big project like Kubernetes easier to parse and navigate.&lt;/p>
&lt;h3 id="how-do-you-help-new-contributors-get-started-are-there-any-prerequisites-to-join">How do you help new contributors get started? Are there any prerequisites to join?&lt;/h3>
&lt;p>There are no prerequisites to get started with contributing to Docs. But there is certainly a fantastic
guide to contributing to Docs, which is kept as up to date and relevant as possible, and new contributors
are urged to read it and keep it handy. Also, there are a lot of useful pins and bookmarks in the
community Slack channel &lt;a href="https://kubernetes.slack.com/archives/C1J0BPD2M">#sig-docs&lt;/a>. GitHub issues with
the good-first-issue label in the kubernetes/website repo are a great place to create your first PR.
Now, SIG Docs has a monthly New Contributor Meet and Greet on the first Tuesday of the month with the
first occupant of the New Contributor Ambassador role, &lt;a href="https://twitter.com/RinkiyaKeDad">Arsh Sharma&lt;/a>.
This has helped in making a more accessible point of contact within the SIG for new contributors.&lt;/p>
&lt;h3 id="any-sig-related-accomplishment-that-youre-really-proud-of">Any SIG related accomplishment that you’re really proud of?&lt;/h3>
&lt;p>&lt;strong>DM &amp;amp; RL&lt;/strong>: The formalization of the localization subproject in the last few months has been a big win
for SIG Docs, given all the great work put in by contributors from different countries. Earlier, the
localization efforts didn&amp;rsquo;t have a streamlined process, so over the past couple of months the focus has been
on providing structure by drafting a KEP to formalize localization as a subproject, which
is planned to be pushed through by the end of the third quarter.&lt;/p>
&lt;p>&lt;strong>DM&lt;/strong>: Another area where there has been a lot of success is the New Contributor Ambassador role,
which has made for a more accessible point of contact when onboarding new contributors into the project.&lt;/p>
&lt;p>&lt;strong>NV&lt;/strong>: For each release cycle, SIG Docs has to review release docs and feature blogs highlighting
release updates within a short window. This is always a big effort for the docs and blogs reviewers.&lt;/p>
&lt;h3 id="is-there-something-exciting-coming-up-for-the-future-of-sig-docs-that-you-want-the-community-to-know">Is there something exciting coming up for the future of SIG Docs that you want the community to know?&lt;/h3>
&lt;p>SIG Docs is now looking forward to establishing a roadmap, building a steady pipeline of folks able
to push improvements to the documentation, and streamlining community involvement in triaging issues and
reviewing the PRs being filed. To build such a contributor and reviewer base, a mentorship program is
being set up to help current contributors become reviewers. This definitely is a space to watch for more!&lt;/p>
&lt;h2 id="wrap-up">Wrap Up&lt;/h2>
&lt;p>SIG Docs hosted a &lt;a href="https://www.youtube.com/watch?v=GDfcBF5et3Q">deep dive talk&lt;/a>
at KubeCon + CloudNativeCon North America 2021, covering their awesome SIG.
They are very welcoming and have been the starting ground into Kubernetes
for a lot of new folks who want to contribute to the project.
Join the &lt;a href="https://github.com/kubernetes/community/blob/master/sig-docs/README.md">SIG&amp;rsquo;s meetings&lt;/a> to find out
about their most recent updates, their plans for the forthcoming year, and how to get involved in the upstream Docs team as a contributor!&lt;/p>
&lt;p>&lt;strong>Authors:&lt;/strong> Patrick Ohly (Intel)&lt;/p>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md">Structured Logging Working
Group&lt;/a>
has added new capabilities to the logging infrastructure in Kubernetes
1.24. This blog post explains how developers can take advantage of those to
make log output more useful and how they can get involved with improving Kubernetes.&lt;/p>
&lt;h2 id="structured-logging">Structured logging&lt;/h2>
&lt;p>The goal of &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/1602-structured-logging/README.md">structured
logging&lt;/a>
is to replace C-style formatting and the resulting opaque log strings with log
entries that have a well-defined syntax for storing message and parameters
separately, for example as a JSON struct.&lt;/p>
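&lt;p>As a rough illustration of that difference, here is a small self-contained Go sketch. The &lt;code>infoS&lt;/code> helper below is a hypothetical stand-in for klog&amp;rsquo;s real &lt;code>InfoS&lt;/code> call, not the actual implementation; it only shows how keeping the message and the key/value parameters separate yields entries that stay machine-parseable:&lt;/p>

```go
package main

import (
	"fmt"
	"strings"
)

// infoS is a minimal sketch of a structured log call: the message is a
// fixed string and every parameter is passed as an explicit key/value
// pair, so the entry can be rendered as text or JSON without parsing
// the message itself.
func infoS(msg string, keysAndValues ...interface{}) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%q", msg)
	for i := 0; i+1 < len(keysAndValues); i += 2 {
		fmt.Fprintf(&b, " %v=%q", keysAndValues[i], fmt.Sprint(keysAndValues[i+1]))
	}
	return b.String()
}

func main() {
	// C-style: message and parameters are fused into one opaque string.
	fmt.Printf("Pod %s/%s is unhealthy\n", "kube-system", "coredns")
	// Structured: message and parameters remain separate.
	// prints: "pod is unhealthy" namespace="kube-system" pod="coredns"
	fmt.Println(infoS("pod is unhealthy", "namespace", "kube-system", "pod", "coredns"))
}
```

&lt;p>The structured form lets a backend emit the same call as klog text or as a JSON object without re-parsing free-form strings.&lt;/p>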
&lt;p>When using the traditional klog text output format for structured log calls,
strings were originally printed with &lt;code>\n&lt;/code> escape sequences, except when
embedded inside a struct. For structs, log entries could still span multiple
lines, with no clean way to split the log stream into individual entries:&lt;/p>
&lt;pre tabindex="0">&lt;code>I1112 14:06:35.783529 328441 structured_logging.go:51] &amp;#34;using InfoS&amp;#34; longData={Name:long Data:Multiple
lines
with quite a bit
of text. internal:0}
I1112 14:06:35.783549 328441 structured_logging.go:52] &amp;#34;using InfoS with\nthe message across multiple lines&amp;#34; int=1 stringData=&amp;#34;long: Multiple\nlines\nwith quite a bit\nof text.&amp;#34; str=&amp;#34;another value&amp;#34;
&lt;/code>&lt;/pre>&lt;p>Now, the &lt;code>&amp;lt;&lt;/code> and &lt;code>&amp;gt;&lt;/code> markers along with indentation are used to ensure that splitting at a
klog header at the start of a line is reliable and the resulting output is human-readable:&lt;/p>
&lt;pre tabindex="0">&lt;code>I1126 10:31:50.378204 121736 structured_logging.go:59] &amp;#34;using InfoS&amp;#34; longData=&amp;lt;
{Name:long Data:Multiple
lines
with quite a bit
of text. internal:0}
&amp;gt;
I1126 10:31:50.378228 121736 structured_logging.go:60] &amp;#34;using InfoS with\nthe message across multiple lines&amp;#34; int=1 stringData=&amp;lt;
long: Multiple
lines
with quite a bit
of text.
&amp;gt; str=&amp;#34;another value&amp;#34;
&lt;/code>&lt;/pre>&lt;p>Note that the log message itself is printed with quoting. It is meant to be a
fixed string that identifies a log entry, so newlines should be avoided there.&lt;/p>
&lt;p>Before Kubernetes 1.24, some log calls in kube-scheduler still used &lt;code>klog.Info&lt;/code>
for multi-line strings to avoid the unreadable output. Now all log calls have
been updated to support structured logging.&lt;/p>
&lt;h2 id="contextual-logging">Contextual logging&lt;/h2>
&lt;p>&lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/3077-contextual-logging/README.md">Contextual logging&lt;/a>
is based on the &lt;a href="https://github.com/go-logr/logr#a-minimal-logging-api-for-go">go-logr API&lt;/a>. The key
idea is that libraries are passed a logger instance by their caller and use
that for logging instead of accessing a global logger. The binary decides about
the logging implementation, not the libraries. The go-logr API is designed
around structured logging and supports attaching additional information to a
logger.&lt;/p>
&lt;p>This enables additional use cases:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>The caller can attach additional information to a logger:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName">&lt;code>WithName&lt;/code>&lt;/a> adds a prefix&lt;/li>
&lt;li>&lt;a href="https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues">&lt;code>WithValues&lt;/code>&lt;/a> adds key/value pairs&lt;/li>
&lt;/ul>
&lt;p>When this extended logger is passed into a function and the function uses it
instead of the global logger, the additional information is
included in all log entries, without having to modify the code that
generates them. This is useful in highly parallel applications
where it can become hard to identify all log entries for a certain operation
because the output from different operations gets interleaved.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>When running unit tests, log output can be associated with the current test.
Then when a test fails, only the log output of the failed test gets shown
by &lt;code>go test&lt;/code>. That output can also be more verbose by default because it
will not get shown for successful tests. Tests can be run in parallel
without interleaving their output.&lt;/p>
&lt;/li>
&lt;/ul>
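&lt;p>The first use case above can be sketched with a minimal hand-rolled logger. The &lt;code>logger&lt;/code> type is only an illustrative stand-in for the real go-logr &lt;code>Logger&lt;/code>, and &lt;code>startPod&lt;/code> is a hypothetical callee; the point is that the callee never touches a global logger:&lt;/p>

```go
package main

import (
	"fmt"
	"strings"
)

// logger sketches the go-logr idea: WithName and WithValues return a new
// logger carrying extra information; they never mutate the receiver.
type logger struct {
	prefix string
	values []string
}

func (l logger) WithName(name string) logger {
	return logger{prefix: l.prefix + name + ": ", values: l.values}
}

func (l logger) WithValues(kv ...string) logger {
	return logger{prefix: l.prefix, values: append(append([]string{}, l.values...), kv...)}
}

// Info renders the entry with the attached values first, then the
// call-site key/value pairs, mimicking klog's text output shape.
func (l logger) Info(msg string, kv ...string) string {
	parts := []string{fmt.Sprintf("%s%q", l.prefix, msg)}
	for _, pairs := range [][]string{l.values, kv} {
		for i := 0; i+1 < len(pairs); i += 2 {
			parts = append(parts, fmt.Sprintf("%s=%q", pairs[i], pairs[i+1]))
		}
	}
	return strings.Join(parts, " ")
}

// startPod receives its logger from the caller; every entry it writes
// automatically carries the caller-attached name and key/value pairs.
func startPod(log logger) {
	// prints: example: "starting" pod="coredns" phase="init"
	fmt.Println(log.Info("starting", "phase", "init"))
}

func main() {
	log := logger{}.WithName("example").WithValues("pod", "coredns")
	startPod(log)
}
```

&lt;p>Because &lt;code>startPod&lt;/code> only uses the logger it is handed, the caller can attribute all of its output to one operation without changing &lt;code>startPod&lt;/code> itself.&lt;/p>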
&lt;p>One of the design decisions for contextual logging was to allow attaching a
logger as value to a &lt;code>context.Context&lt;/code>. Since the logger encapsulates all
aspects of the intended logging for the call, it is &lt;em>part&lt;/em> of the context and
not just &lt;em>using&lt;/em> it. A practical advantage is that many APIs already have a
&lt;code>ctx&lt;/code> parameter or adding one has additional advantages, like being able to get
rid of &lt;code>context.TODO()&lt;/code> calls inside the functions.&lt;/p>
&lt;p>Another decision was to not break compatibility with klog v2:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Libraries that use the traditional klog logging calls in a binary that has
set up contextual logging will work and log through the logging backend
chosen by the binary. However, such log output will not include the
additional information and will not work well in unit tests, so libraries
should be modified to support contextual logging. The &lt;a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md">migration guide&lt;/a>
for structured logging has been extended to also cover contextual logging.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>When a library supports contextual logging and retrieves a logger from its
context, it will still work in a binary that does not initialize contextual
logging because it will get a logger that logs through klog.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>In Kubernetes 1.24, contextual logging is a new alpha feature with
&lt;code>ContextualLogging&lt;/code> as feature gate. When disabled (the default), the new klog
API calls for contextual logging (see below) become no-ops to avoid performance
or functional regressions.&lt;/p>
&lt;p>No Kubernetes component has been converted yet. An &lt;a href="https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/staging/src/k8s.io/component-base/logs/example/cmd/logger.go">example program&lt;/a>
in the Kubernetes repository demonstrates how to enable contextual logging in a
binary and how the output depends on the binary&amp;rsquo;s parameters:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> &lt;span style="color:#a2f">cd&lt;/span> &lt;span style="color:#b8860b">$GOPATH&lt;/span>/src/k8s.io/kubernetes/staging/src/k8s.io/component-base/logs/example/cmd/
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> go run . --help
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">...
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> AllAlpha=true|false (ALPHA - default=false)
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> AllBeta=true|false (BETA - default=false)
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> ContextualLogging=true|false (ALPHA - default=false)
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">&lt;/span>&lt;span style="color:#000080;font-weight:bold">$&lt;/span> go run . --feature-gates &lt;span style="color:#b8860b">ContextualLogging&lt;/span>&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#a2f">true&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">...
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">I0404 18:00:02.916429 451895 logger.go:94] &amp;#34;example/myname: runtime&amp;#34; foo=&amp;#34;bar&amp;#34; duration=&amp;#34;1m0s&amp;#34;
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">I0404 18:00:02.916447 451895 logger.go:95] &amp;#34;example: another runtime&amp;#34; foo=&amp;#34;bar&amp;#34; duration=&amp;#34;1m0s&amp;#34;
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The &lt;code>example&lt;/code> prefix and &lt;code>foo=&amp;quot;bar&amp;quot;&lt;/code> were added by the caller of the function
which logs the &lt;code>runtime&lt;/code> message and &lt;code>duration=&amp;quot;1m0s&amp;quot;&lt;/code> value.&lt;/p>
&lt;p>The sample code for klog includes an
&lt;a href="https://github.com/kubernetes/klog/blob/v2.60.1/ktesting/example/example_test.go">example&lt;/a>
for a unit test with per-test output.&lt;/p>
&lt;h2 id="klog-enhancements">klog enhancements&lt;/h2>
&lt;h3 id="contextual-logging-api">Contextual logging API&lt;/h3>
&lt;p>The following calls manage the lookup of a logger:&lt;/p>
&lt;dl>
&lt;dt>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#FromContext">&lt;code>FromContext&lt;/code>&lt;/a>&lt;/dt>
&lt;dd>from a &lt;code>context&lt;/code> parameter, with fallback to the global logger&lt;/dd>
&lt;dt>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#Background">&lt;code>Background&lt;/code>&lt;/a>&lt;/dt>
&lt;dd>the global fallback, with no intention to support contextual logging&lt;/dd>
&lt;dt>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#TODO">&lt;code>TODO&lt;/code>&lt;/a>&lt;/dt>
&lt;dd>the global fallback, but only as a temporary solution until the function gets extended to accept
a logger through its parameters&lt;/dd>
&lt;dt>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#SetLoggerWithOptions">&lt;code>SetLoggerWithOptions&lt;/code>&lt;/a>&lt;/dt>
&lt;dd>changes the fallback logger; when called with &lt;a href="https://pkg.go.dev/k8s.io/klog/v2#ContextualLogger">&lt;code>ContextualLogger(true)&lt;/code>&lt;/a>,
the logger is ready to be called directly, in which case logging will be done
without going through klog&lt;/dd>
&lt;/dl>
&lt;p>To support the feature gate mechanism in Kubernetes, klog has wrapper calls for
the corresponding go-logr calls and a global boolean controlling their behavior:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#LoggerWithName">&lt;code>LoggerWithName&lt;/code>&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#LoggerWithValues">&lt;code>LoggerWithValues&lt;/code>&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#NewContext">&lt;code>NewContext&lt;/code>&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#EnableContextualLogging">&lt;code>EnableContextualLogging&lt;/code>&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Usage of those functions in Kubernetes code is enforced with a linter
check. The klog default for contextual logging is to enable the functionality
because it is considered stable in klog. It is only in Kubernetes binaries
where that default gets overridden and (in some binaries) controlled via the
&lt;code>--feature-gates&lt;/code> parameter.&lt;/p>
&lt;h3 id="ktesting-logger">ktesting logger&lt;/h3>
&lt;p>The new &lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting">ktesting&lt;/a> package
implements logging through &lt;code>testing.T&lt;/code> using klog&amp;rsquo;s text output format. It has
a &lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting#NewTestContext">single API call&lt;/a> for
instrumenting a test case and &lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/ktesting/init">support for command line flags&lt;/a>.&lt;/p>
&lt;h3 id="klogr">klogr&lt;/h3>
&lt;p>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/klogr">&lt;code>klog/klogr&lt;/code>&lt;/a> continues to be
supported and it&amp;rsquo;s default behavior is unchanged: it formats structured log
entries using its own, custom format and prints the result via klog.&lt;/p>
&lt;p>However, this usage is discouraged because that format is neither
machine-readable (in contrast to real JSON output as produced by zapr, the
go-logr implementation used by Kubernetes) nor human-friendly (in contrast to
the klog text format).&lt;/p>
&lt;p>Instead, a klogr instance should be created with
&lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/klogr#WithFormat">&lt;code>WithFormat(FormatKlog)&lt;/code>&lt;/a>
which chooses the klog text format. A simpler construction method with the same
result is the new
&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#NewKlogr">&lt;code>klog.NewKlogr&lt;/code>&lt;/a>. That is the
logger that klog returns as fallback when nothing else is configured.&lt;/p>
&lt;h3 id="reusable-output-test">Reusable output test&lt;/h3>
&lt;p>A lot of go-logr implementations have very similar unit tests where they check
the result of certain log calls. If a developer didn&amp;rsquo;t know about certain
caveats, like a &lt;code>String&lt;/code> function that panics when called, then it
is likely that both the handling of such caveats and the unit test are missing.&lt;/p>
&lt;p>&lt;a href="https://pkg.go.dev/k8s.io/klog/v2@v2.60.1/test">&lt;code>klog.test&lt;/code>&lt;/a> is a reusable set
of test cases that can be applied to a go-logr implementation.&lt;/p>
&lt;h3 id="output-flushing">Output flushing&lt;/h3>
&lt;p>klog used to start a goroutine unconditionally during &lt;code>init&lt;/code> which flushed
buffered data at a hard-coded interval. Now that goroutine is only started on
demand (i.e. when writing to files with buffering) and can be controlled with
&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#StopFlushDaemon">&lt;code>StopFlushDaemon&lt;/code>&lt;/a> and
&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#StartFlushDaemon">&lt;code>StartFlushDaemon&lt;/code>&lt;/a>.&lt;/p>
&lt;p>When a go-logr implementation buffers data, flushing that data can be
integrated into &lt;a href="https://pkg.go.dev/k8s.io/klog/v2#Flush">&lt;code>klog.Flush&lt;/code>&lt;/a> by
registering the logger with the
&lt;a href="https://pkg.go.dev/k8s.io/klog/v2#FlushLogger">&lt;code>FlushLogger&lt;/code>&lt;/a> option.&lt;/p>
&lt;h3 id="various-other-changes">Various other changes&lt;/h3>
&lt;p>For a description of all other enhancements see in the &lt;a href="https://github.com/kubernetes/klog/releases">release notes&lt;/a>.&lt;/p>
&lt;h2 id="logcheck">logcheck&lt;/h2>
&lt;p>Originally designed as a linter for structured log calls, the
&lt;a href="https://github.com/kubernetes/klog/tree/788efcdee1e9be0bfbe5b076343d447314f2377e/hack/tools/logcheck">&lt;code>logcheck&lt;/code>&lt;/a>
tool has been enhanced to also support contextual logging and traditional klog
log calls. These enhanced checks have already found bugs in Kubernetes, like calling
&lt;code>klog.Info&lt;/code> instead of &lt;code>klog.Infof&lt;/code> with a format string and parameters.&lt;/p>
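&lt;p>As a toy illustration of that particular rule - the real logcheck analyzes the Go AST, whereas this hypothetical &lt;code>checkInfoCall&lt;/code> only inspects a message literal:&lt;/p>

```go
package main

import (
	"fmt"
	"regexp"
)

// verbPattern matches printf-style verbs such as %s, %d, or %v.
var verbPattern = regexp.MustCompile(`%[-+ #0]*[0-9*]*(\.[0-9*]+)?[a-zA-Z]`)

// checkInfoCall sketches the kind of rule logcheck enforces: a
// non-formatting call like klog.Info must not receive a format string.
func checkInfoCall(funcName, msg string) error {
	if funcName == "klog.Info" && verbPattern.MatchString(msg) {
		return fmt.Errorf("%s called with format string %q; use klog.Infof", funcName, msg)
	}
	return nil
}

func main() {
	// flagged: a format verb passed to the non-formatting call
	fmt.Println(checkInfoCall("klog.Info", "found %d pods"))
	// allowed: klog.Infof is the formatting variant
	fmt.Println(checkInfoCall("klog.Infof", "found %d pods"))
}
```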
&lt;p>It can be included as a plugin in a &lt;code>golangci-lint&lt;/code> invocation, which is how
&lt;a href="https://github.com/kubernetes/kubernetes/commit/17e3c555c5115f8c9176bae10ba45baa04d23a7b">Kubernetes uses it now&lt;/a>,
or get invoked stand-alone.&lt;/p>
&lt;p>We are in the process of &lt;a href="https://github.com/kubernetes/klog/issues/312">moving the tool&lt;/a> into a new repository because it isn&amp;rsquo;t
really related to klog and its releases should be tracked and tagged properly.&lt;/p>
&lt;h2 id="next-steps">Next steps&lt;/h2>
&lt;p>The &lt;a href="https://github.com/kubernetes/community/tree/master/wg-structured-logging">Structured Logging WG&lt;/a>
is always looking for new contributors. The migration
away from C-style logging is now going to target structured, contextual logging
in one step to reduce the overall code churn and number of PRs. Changing log
calls is a good first contribution to Kubernetes and an opportunity to get to
know code in various different areas.&lt;/p></description></item><item><title>Blog: February 2022 Community Meeting Highlights</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/05/05/community-meeting-februrary-2022/</link><pubDate>Thu, 05 May 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/05/05/community-meeting-februrary-2022/</guid><description>
&lt;p>&lt;strong>Author:&lt;/strong> Nigel Brown (VMware)&lt;/p>
&lt;p>We just had our first contributor community meeting this year, and it was awesome to be back with you
in that format. These meetings will be happening on Zoom once per month, on the third Thursday of the
month - that should be available in your calendar if you’re subscribed to the k-dev mailing list.
Community meetings are an opportunity for you to meet synchronously with other members of the
Kubernetes community to talk about issues of general appeal.&lt;/p>
&lt;p>This meeting kicked off with an update on the 1.24 release from Xander Grzywinski, one of the
release lead shadows. This release is scheduled for April 19, 2022, with a code freeze
scheduled for March 30th. At the time of the meeting there were 66 individual enhancements included,
as well as bug fixes. You can join the conversation on Slack in
&lt;a href="https://kubernetes.slack.com/archives/C2C40FMNF">#sig-release&lt;/a>.&lt;/p>
&lt;p>&lt;em>Update:&lt;/em> Kubernetes 1.24 was delayed and released on May 3, 2022.&lt;/p>
&lt;p>From there, the discussion moved to the dockershim removal and the docs updates we need to make
around that, with the discussion led by Kat Cosgrove. The main takeaway was that if you have a
platform, talk to folks about this change. For most people interacting with Kubernetes, it is probably not as
impactful as it sounds. We have a &lt;a href="https://kubernetes.io/dockershim">helpful FAQ&lt;/a> if you need it. You
can even try out an alpha release from the &lt;a href="https://github.com/kubernetes/kubernetes/releases?q=v1.24.0-alpha">1.24 release page&lt;/a>.&lt;/p>
&lt;p>We moved on to a spirited discussion of a Kubernetes Enhancement Proposal (KEP) about raising the bar
for reliability brought by Wojciech Tyczynski. It was emphasized that effort on this proposal should
be a collaborative effort with SIG Testing who are managing dashboards on test flakiness among other
metrics. You can find the proposed &lt;a href="https://github.com/kubernetes/enhancements/pull/3139">KEP&lt;/a> on
GitHub.&lt;/p>
&lt;p>Finally, Paris mentioned the k-dev migration of the developer mailing list. If you manage Google Docs
assets, you may need to share them with the new developer list. New community members may not be able
to join from assets shared with the old lists.&lt;/p>
&lt;p>You can find the
&lt;a href="https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit?pli=1#heading=h.lk3ecc5rt40z">full meeting notes&lt;/a>
posted online (thanks Josh Berkus!) as well as the
&lt;a href="https://www.youtube.com/watch?v=qwLsGfqHEhk">recording on YouTube&lt;/a>. If you have topics you would
like to discuss or you’re interested in being the host of a future community meeting, please reach
out to Laura Santamaria (@nimbinatus) on Slack.&lt;/p></description></item><item><title>Blog: K8s CI Bot Helper Job: automating "make update"</title><link>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/</link><pubDate>Tue, 15 Mar 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/</guid><description>
&lt;p>&lt;strong>Authors:&lt;/strong> &lt;a href="https://github.com/SubhasmitaSw">Subhasmita Swain&lt;/a>, &lt;a href="https://github.com/dims">Davanum Srinivas&lt;/a>&lt;/p>
&lt;hr>
&lt;p>If you are contributing to the Kubernetes project and developing on a Windows PC, you may run into issues that cause your pull request to be held up by test failures. This article describes a workaround for one such issue I encountered while trying to get my changes approved and merged into the master branch.&lt;/p>
&lt;h2 id="why-is-this-needed">Why is this needed?&lt;/h2>
&lt;p>While contributing some minor documentation changes to &lt;a href="https://github.com/kubernetes/kubernetes">kubernetes/kubernetes&lt;/a>, I needed to regenerate files so that the pushed changes stayed consistent with the rest of the verified documentation. A single command, &lt;code>make update&lt;/code>, runs all of the presubmit verification scripts and ensures that the tests on the CI pipeline pass. On the &amp;ldquo;Windows Subsystem for Linux&amp;rdquo; environment, however, the tests failed, specifically the &lt;a href="https://github.com/kubernetes/kubernetes/blob/master/hack/update-openapi-spec.sh">update-openapi-spec.sh&lt;/a> script (in my case; take a look at the conversation &lt;a href="https://github.com/kubernetes/kubernetes/pull/107691">here&lt;/a>), eventually failing the &lt;code>pull-kubernetes-verify&lt;/code> tests.&lt;/p>
&lt;p>You might encounter the following on your PR.&lt;/p>
&lt;p>The tests failing for this particular issue:&lt;/p>
&lt;p>&lt;img alt="Failed Test Cases" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/failing_test_cases.png">&lt;/p>
&lt;p>Subsequently, the CI bot comments on the PR:&lt;/p>
&lt;p>&lt;img alt="Failed Test Cases robot comment" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/k8s_ci_failed_tests.png">&lt;/p>
&lt;p>Additionally, you can inspect each failed test via the link provided under &lt;code>Details&lt;/code> in the image above.&lt;/p>
&lt;h3 id="potential-workaround">Potential Workaround&lt;/h3>
&lt;p>Run the failing &lt;code>.sh&lt;/code> scripts (identified from the CI job output) individually to regenerate the expected files and fix the failures. These scripts reside under the &lt;code>hack/&lt;/code> directory at the root of the &lt;code>kubernetes/kubernetes&lt;/code> code base.&lt;/p>
&lt;p>&lt;code>kubernetes/kubernetes&lt;/code> → &lt;code>hack/*.sh&lt;/code>&lt;/p>
&lt;p>In this particular case, the following scripts needed to be run:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/hack/update-generated-protobuf.sh">hack/update-generated-protobuf.sh&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/hack/update-generated-swagger-docs.sh">hack/update-generated-swagger-docs.sh&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/kubernetes/blob/master/hack/update-openapi-spec.sh">hack/update-openapi-spec.sh&lt;/a>&lt;/li>
&lt;/ul>
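&lt;p>As a rough sketch, the scripts above can be run in sequence from the repository root. The &lt;code>run_update_scripts&lt;/code> helper below is hypothetical (it is not part of the Kubernetes tooling) and assumes a local &lt;code>kubernetes/kubernetes&lt;/code> checkout:&lt;/p>

```shell
# Hypothetical helper (not part of the Kubernetes tooling): run the
# regeneration scripts one by one from the root of a kubernetes/kubernetes
# checkout, stopping at the first failure.
run_update_scripts() {
  repo_root="$1"
  for script in update-generated-protobuf.sh \
                update-generated-swagger-docs.sh \
                update-openapi-spec.sh; do
    echo "running hack/${script}"
    "${repo_root}/hack/${script}" || return 1
  done
}
```

&lt;p>For example: &lt;code>run_update_scripts ~/go/src/k8s.io/kubernetes&lt;/code>. Each script regenerates its files in place; commit the resulting changes to your branch.&lt;/p>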
&lt;p>After you run these, you might see one of the following errors:&lt;/p>
&lt;hr>
&lt;p>Using Codespaces:&lt;/p>
&lt;p>&lt;img alt="codespaces error output" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/codespaces.png">
&lt;img alt="codespaces error output" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/codespaces2.png">&lt;/p>
&lt;hr>
&lt;p>Using Visual Studio Code:&lt;/p>
&lt;p>&lt;img alt="vscode error output" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/vscode.png">&lt;/p>
&lt;h3 id="possible-solutions-to-counter-the-above-errors">Possible solutions to counter the above errors&lt;/h3>
&lt;ol>
&lt;li>Remove the Makefile and Makefile.generated_files files:&lt;/li>
&lt;/ol>
&lt;pre tabindex="0">&lt;code> rm Makefile Makefile.generated_files
&lt;/code>&lt;/pre>&lt;ol start="2">
&lt;li>Create the symlinks:&lt;/li>
&lt;/ol>
&lt;pre tabindex="0">&lt;code> ln -s build/root/Makefile Makefile
&lt;/code>&lt;/pre>&lt;pre tabindex="0">&lt;code> ln -s build/root/Makefile.generated_files Makefile.generated_files
&lt;/code>&lt;/pre>&lt;blockquote>
&lt;p>Note: it can take a considerable amount of time to generate the files on Windows.&lt;/p>
&lt;/blockquote>
&lt;h2 id="the-current-situation">The current situation&lt;/h2>
&lt;p>Many contributors do not have access to powerful environments in which to run &lt;code>make update&lt;/code> or &lt;code>make verify&lt;/code>. They can utilise &lt;em>vscode/wsl/codespaces&lt;/em> and other tools to propose modifications, but they might get tripped up by the &lt;code>make verify&lt;/code> scripts because, in many cases, we end up with files that need to be re-generated. It&amp;rsquo;s a tall order for them to scan the build log from &lt;code>make verify&lt;/code> to determine which specific scripts in the &lt;code>hack/&lt;/code> directory they need to run.&lt;/p>
&lt;h3 id="solutions">Solutions&lt;/h3>
&lt;p>According to &lt;a href="https://github.com/dims">@dims&lt;/a>, a &lt;a href="https://github.com/kubernetes/kubernetes/issues/109374#issuecomment-1092155063">long term solution&lt;/a> for this problem would be to add a new Prow bot command that generates an additional commit on their PR.&lt;/p>
&lt;p>In the meantime, the short-term workaround is to add a CI job that folks can trigger when they need it. The job runs &lt;code>make update&lt;/code> and produces a zip archive that they can download; the archive includes all the files that changed as a result of running &lt;code>make update&lt;/code>.&lt;/p>
&lt;h3 id="implementing-the-short-term-workaround">Implementing the Short Term Workaround&lt;/h3>
&lt;p>The problem was that some of the verify scripts need both Linux and plenty of local
resources (CPU and memory) to work well. Once we realized this, we decided to add a new CI job, named &lt;code>pull-kubernetes-update&lt;/code>.&lt;/p>
&lt;p>You can trigger this CI job by commenting &lt;code>/test pull-kubernetes-update&lt;/code> as a Prow bot command in any Kubernetes PR. This CI job runs &lt;code>make update&lt;/code> and then generates a zip archive named &lt;code>updated-files.zip&lt;/code> in the artifacts directory for that job.&lt;/p>
&lt;p>You can then download the archive, which contains the changes made when &lt;code>make update&lt;/code> was run;
once the download is finished, you can update your PR with the newly generated code.&lt;/p>
&lt;h4 id="using-the-short-term-solution">Using the short-term solution&lt;/h4>
&lt;p>Here&amp;rsquo;s what you can do as a contributor to get automated help updating your pull request with generated files.&lt;/p>
&lt;p>Make sure to rebase your working branch against the latest base branch (usually this is &lt;code>master&lt;/code>), so that your PR includes the most recent upstream commits. Remember to push or force-push your changes.&lt;/p>
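&lt;p>The rebase step might look like the following. This is a sketch only: it assumes your fork&amp;rsquo;s remote is named &lt;code>origin&lt;/code> and the &lt;code>kubernetes/kubernetes&lt;/code> remote is named &lt;code>upstream&lt;/code> (adjust the names to your setup), and the &lt;code>rebase_pr_branch&lt;/code> helper itself is hypothetical:&lt;/p>

```shell
# Hypothetical helper: rebase the current PR branch on the latest base branch
# and safely force-push it back to the fork. Assumes remotes named "upstream"
# (kubernetes/kubernetes) and "origin" (your fork).
rebase_pr_branch() {
  base="${1:-master}"
  git fetch upstream || return 1
  git rebase "upstream/${base}" || return 1
  # --force-with-lease refuses to overwrite remote work you have not seen
  git push --force-with-lease origin
}
```

&lt;p>For example: &lt;code>rebase_pr_branch master&lt;/code> from your working branch.&lt;/p>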
&lt;p>On your PR, write a new comment that contains only the text:&lt;/p>
&lt;pre tabindex="0">&lt;code> /test pull-kubernetes-update
&lt;/code>&lt;/pre>&lt;p>&lt;img alt="pull-kubernetes-update command" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/tpku_command.png">&lt;/p>
&lt;p>You will see the automated checks listed, along with their details.&lt;/p>
&lt;p>&lt;img alt="command preview on github" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/checklists.png">&lt;/p>
&lt;p>Once the checks are complete, click the &lt;code>Details&lt;/code> link on &lt;code>pull-kubernetes-update&lt;/code> job to go to the artifacts directory and download the &lt;code>updated-files.zip&lt;/code> file.&lt;/p>
&lt;p>&lt;img alt="automated checklist of tests appearing after update command execution" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/completed_checklist.png">&lt;/p>
&lt;p>&lt;img alt="completed checklist of tests" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/artifacts.png">&lt;/p>
&lt;p>&lt;img alt="artifacts directory" src="https://deploy-preview-670--kubernetes-contributor.netlify.app/blog/2022/03/15/k8s-triage-bot-helper-ci-job/jenkins.png">&lt;/p>
&lt;p>Now, update the PR by adding the extracted files you downloaded.&lt;/p>
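&lt;p>As a sketch, applying the archive might look like this. The &lt;code>apply_updated_files&lt;/code> helper is hypothetical and assumes the &lt;code>unzip&lt;/code> tool is installed; &lt;code>updated-files.zip&lt;/code> is the artifact produced by the &lt;code>pull-kubernetes-update&lt;/code> job:&lt;/p>

```shell
# Hypothetical helper: unpack the downloaded updated-files.zip over the local
# checkout, then commit whatever changed. Requires the "unzip" tool.
apply_updated_files() {
  archive="$1"   # path to the downloaded updated-files.zip
  repo="$2"      # path to your kubernetes/kubernetes checkout
  unzip -o "$archive" -d "$repo" || return 1
  cd "$repo" || return 1
  git add -A
  git commit -m "Apply generated files from pull-kubernetes-update"
}
```

&lt;p>After committing, push the branch to update your PR.&lt;/p>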
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>For the time being, several fantastic people are working on a bot command that will produce an additional commit with the generated files for the failed tests; in the long term, this will make things simpler. If anything becomes confusing at any point, we encourage you to ask questions on Slack, regardless of experience level or complexity! We hope this shortens your debugging time and alleviates some of your concerns!&lt;/p>
&lt;p>&lt;strong>Authors &amp;amp; Interviewers:&lt;/strong> &lt;a href="https://github.com/anubha-v-ardhan">Anubhav Vardhan&lt;/a>, &lt;a href="https://github.com/Atharva-Shinde">Atharva Shinde&lt;/a>, &lt;a href="https://github.com/AvineshTripathi">Avinesh Tripathi&lt;/a>, &lt;a href="https://github.com/bradmccoydev">Brad McCoy&lt;/a>, &lt;a href="https://github.com/Debanitrkl">Debabrata Panigrahi&lt;/a>, &lt;a href="https://github.com/jayesh-srivastava">Jayesh Srivastava&lt;/a>, &lt;a href="https://github.com/verma-kunal">Kunal Verma&lt;/a>, &lt;a href="https://github.com/PranshuSrivastava">Pranshu Srivastava&lt;/a>, &lt;a href="https://github.com/Priyankasaggu11929">Priyanka Saggu&lt;/a>, &lt;a href="https://github.com/PurneswarPrasad">Purneswar Prasad&lt;/a>, &lt;a href="https://github.com/vedant-kakde">Vedant Kakde&lt;/a>&lt;/p>
&lt;hr>
&lt;p>Good day, everyone 👋&lt;/p>
&lt;p>Welcome back to the second episode of the &amp;ldquo;Meet Our Contributors&amp;rdquo; blog post series for APAC.&lt;/p>
&lt;p>This post will feature four outstanding contributors from the Australia and New Zealand regions, who have played diverse leadership and community roles in the Upstream Kubernetes project.&lt;/p>
&lt;p>So, without further ado, let&amp;rsquo;s get straight to the article.&lt;/p>
&lt;h2 id="caleb-woodbinehttpsgithubcombobymcbobs">&lt;a href="https://github.com/BobyMCbobs">Caleb Woodbine&lt;/a>&lt;/h2>
&lt;p>Caleb Woodbine is currently a member of the ii.nz organisation.&lt;/p>
&lt;p>He began contributing to the Kubernetes project in 2018 as a member of the Kubernetes Conformance working group. His experience was positive, and he benefited from early guidance from &lt;a href="https://github.com/hh">Hippie Hacker&lt;/a>, a fellow contributor from New Zealand.&lt;/p>
&lt;p>He has made major contributions to the Kubernetes project since then through &lt;code>SIG k8s-infra&lt;/code> and the &lt;code>k8s-conformance&lt;/code> working group.&lt;/p>
&lt;p>Caleb is also a co-organizer of the &lt;a href="https://www.meetup.com/cloudnative-nz/">CloudNative NZ&lt;/a> community events, which aim to expand the reach of the Kubernetes project throughout New Zealand in order to encourage technical education and improved employment opportunities.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>There need to be more outreach in APAC and the educators and universities must pick up Kubernetes, as they are very slow and about 8+ years out of date. NZ tends to rather pay overseas than educate locals on the latest cloud tech Locally.&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;h2 id="dylan-grahamhttpsgithubcomdylangraham">&lt;a href="https://github.com/DylanGraham">Dylan Graham&lt;/a>&lt;/h2>
&lt;p>Dylan Graham is a cloud engineer from Adelaide, Australia. He has been contributing to the upstream Kubernetes project since 2018.&lt;/p>
&lt;p>He stated that being a part of such a large-scale project was initially overwhelming, but that the community&amp;rsquo;s friendliness and openness assisted him in getting through it.&lt;/p>
&lt;p>He began by contributing to the project documentation and is now mostly focused on the community support for the APAC region.&lt;/p>
&lt;p>He believes that consistent attendance at community/project meetings, taking on project tasks, and seeking community guidance as needed can help new aspiring developers become effective contributors.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>The feeling of being a part of a large community is really special. I&amp;rsquo;ve met some amazing people, even some before the pandemic in real life :)&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;h2 id="hippie-hackerhttpsgithubcomhh">&lt;a href="https://github.com/hh">Hippie Hacker&lt;/a>&lt;/h2>
&lt;p>Hippie has worked for the CNCF as a Strategic Initiatives contractor from New Zealand for more than five years. He is an active contributor to the k8s-infra, API conformance testing, cloud provider conformance submissions, and apisnoop.cncf.io areas of the upstream Kubernetes &amp;amp; CNCF projects.&lt;/p>
&lt;p>He recounts his early involvement with the Kubernetes project, which began roughly five years ago when his firm, ii.nz, demonstrated &lt;a href="https://ii.nz/post/bringing-the-cloud-to-your-community/">network booting from a Raspberry Pi using PXE and running Gitlab in-cluster to install Kubernetes on servers&lt;/a>.&lt;/p>
&lt;p>He describes his own contributing experience as that of someone who at first tried to do all of the heavy lifting alone, but eventually saw the benefit of group contributions, which reduced burnout, and of dividing up tasks, which allowed folks to keep moving forward on their own momentum.&lt;/p>
&lt;p>He recommends that new contributors use pair programming.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>The cross pollination of approaches and two pairs of eyes on the same work can often yield a much more amplified effect than a PR comment / approval alone can afford.&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;h2 id="nick-younghttpsgithubcomyoungnick">&lt;a href="https://github.com/youngnick">Nick Young&lt;/a>&lt;/h2>
&lt;p>Nick Young works at VMware as a technical lead for Contour, a CNCF ingress controller. He was active with the upstream Kubernetes project from the beginning, and eventually became the chair of the LTS working group, where he advocated for user concerns. He is currently the maintainer of the SIG Network Gateway API subproject.&lt;/p>
&lt;p>His contribution path was notable in that he began working on major areas of the Kubernetes project early on, which shaped his trajectory.&lt;/p>
&lt;p>He asserts the best thing a new contributor can do is to &amp;ldquo;start contributing&amp;rdquo;. Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>Just being active and contributing will get you a long way. Once you&amp;rsquo;ve been active for a while, you&amp;rsquo;ll find that you&amp;rsquo;re able to answer questions, which will mean you&amp;rsquo;re asked questions, and before you know it you are an expert.&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;p>If you have any recommendations/suggestions for who we should interview next, please let us know in &lt;a href="https://kubernetes.slack.com/messages/sig-contribex">#sig-contribex&lt;/a>. Your suggestions would be much appreciated. We&amp;rsquo;re thrilled to have additional folks assisting us in reaching out to even more wonderful individuals of the community.&lt;/p>
&lt;p>We&amp;rsquo;ll see you all in the next one. Till then, happy contributing, everyone! 👋&lt;/p>