Custom Resource Definition in Kubernetes

31 January 2025

Kubernetes has come a long way since its initial release in 2014, beginning with key built-in resources such as Pods, Services, Deployments, and Replication Controllers. As developers started building more complex applications, they needed resources that went beyond what Kubernetes initially provided. They required features like custom workflows, advanced configurations, and support for non-standard infrastructure components. Built-in resources (e.g., Pods, Services) worked well for standard deployments but lacked the flexibility to manage these unique requirements. This created a need for more adaptable solutions.

As a result, Third-Party Resources (TPRs) were introduced in 2016 with Kubernetes 1.2 as a first attempt at letting users define their own resource types. However, TPRs fell short in data validation, functionality, and performance, leading to the introduction of Custom Resource Definitions (CRDs) in 2017 with Kubernetes 1.7.

In this article, we will discuss why developers are eager to enhance Kubernetes with Custom Resource Definitions (CRDs) and what types of challenges or workflows CRDs can address that built-in resources cannot.

By the end of this article, you will understand the critical function CRDs play in Kubernetes and why they have become an essential tool for developers looking to construct complex, flexible processes in Kubernetes.

What is a custom resource definition (CRD)?

Just as programmable driving modes in modern cars allow drivers to personalize their experiences by adjusting things like seat positions or steering sensitivity, CRDs are a powerful feature that enables Kubernetes users to extend the platform with custom resources tailored to their specific needs.

Unlike TPRs, CRDs offer advanced features, including API versioning, seamless integration with custom controllers, and scalability, making them first-class citizens in the Kubernetes ecosystem. While CRDs didn’t single-handedly transform Kubernetes, they have significantly contributed to its adaptability, giving developers the power to create resources specific to their applications’ needs. 

CRDs allow developers to model non-standard resources in Kubernetes, giving them the flexibility to manage anything from databases and backups to complex workflows and custom infrastructure components. With the help of CRDs, Kubernetes transforms from a simple tool for orchestrating containers into a highly flexible platform that can manage almost any infrastructure or application. 

CRDs can be paired with custom controllers, which use newly defined resources to automate complex tasks. This collaboration allows developers to manage applications or infrastructure declaratively, just as Kubernetes already does for its core resources.

Ultimately, CRDs enable Kubernetes to become an expandable system that can be reshaped to meet specific requirements. Developers can specify their resources, create custom objects and have Kubernetes manage them properly, reducing the burden of maintaining varied and specialized applications.
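To make this concrete, here is a minimal sketch of a CRD manifest that registers a new resource type with the cluster. The Database kind, the example.com group, and the spec fields are hypothetical, chosen only for illustration:

```yaml
# A hypothetical CRD that registers a new "Database" resource type
# under the example.com API group.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must follow the pattern <plural>.<group>
  name: databases.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
    shortNames:
      - db
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
```

Once this is applied, the API server serves the new type under /apis/example.com/v1/, and commands like kubectl get databases work just as they do for built-in resources.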

Security Considerations for CRDs

While CRDs are incredibly powerful, they come with potential security implications. Improper configurations, unvalidated data, or overly permissive access can open up vulnerabilities in your Kubernetes cluster. Some good ways to avoid these risks include:

  • Enforcing RBAC policies to restrict who can define or manage CRDs.
  • Defining strong validation schemas to ensure custom resources adhere to expected structures.
  • Securing custom controllers by running them with minimal permissions and regular updates.
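On the second point, validation in a v1 CRD is expressed as an OpenAPI v3 schema inside each version entry. A sketch of a stricter schema fragment (the field names and allowed values are hypothetical):

```yaml
# Fragment of a CRD version entry: reject malformed custom resources
# at admission time instead of letting controllers fail later.
schema:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        required: ["engine", "replicas"]  # both fields must be present
        properties:
          engine:
            type: string
            enum: ["postgres", "mysql"]   # only known engines accepted
          replicas:
            type: integer
            minimum: 1
            maximum: 10
```

With a schema like this, the API server rejects an object whose spec.replicas is 0 or whose spec.engine is an unrecognized string before any controller ever sees it.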

To fully appreciate CRDs’ potential, it is essential to understand the Kubernetes API and the differences between native and custom resources.

Kubernetes API and native resources vs custom resources

The Kubernetes API is Kubernetes’ engine room. It handles how resources, both native and custom, are created, defined, and managed. It exposes Kubernetes resources (like pods, services, etc.) by providing a RESTful interface that allows clients to interact with its API server using HTTP requests.

So, running a command like “kubectl get pods -n e-commerce” issues an HTTP GET request to the API server at a resource-based URL such as /api/v1/namespaces/e-commerce/pods, which returns the pods in that namespace. This is possible because the API represents every Kubernetes resource with a predictable, resource-based URL.

Native Kubernetes resources (like Deployments and Pods) are predefined in the API, each with its own schema and controller. This means the API server already knows how to validate, store, and manage these native resources.

Each native resource in Kubernetes has a defined structure that includes four critical components used by Kubernetes to understand, create, and manage them. They are: 

  • apiVersion: Specifies which version of the Kubernetes API to use for the resource, so that its schema and behavior are interpreted correctly.
  • kind: Defines the type of resource being managed, telling Kubernetes what role it plays and which controllers should handle it.
  • metadata: Contains information such as the resource’s name, namespace, labels, and annotations, allowing the resource to be referenced and discovered.
  • spec: Describes the desired state of the resource, including its configuration and requirements. Kubernetes uses this to detect when the actual state deviates from the desired one.
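All four components appear together in every manifest. For example, a minimal native Deployment:

```yaml
apiVersion: apps/v1        # API group/version that defines Deployments
kind: Deployment           # the resource type being managed
metadata:
  name: web                # how the object is named and discovered
  namespace: e-commerce
  labels:
    app: web
spec:                      # desired state the controller reconciles
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
```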

It is important to note that Kubernetes has built-in knowledge of these native resources through its predefined schemas and controllers, which makes them easy to manage. The downside is that these schemas and controllers cover only a subset of the needs developers may have.

To address the growing complexity of developer needs without expanding the core Kubernetes API, CRDs were introduced. CRDs allow developers to define their own resource types, seamlessly integrating them into the Kubernetes ecosystem. This flexibility enables custom resources to feel like native Kubernetes components.

Although Custom Resources function much like native resources, they have important distinctions that affect how they are used and managed within Kubernetes. Let’s explore these differences between native and custom resources.

Native Resources vs Custom Resources

While the Kubernetes API manages both, they differ in structure, behavior, and purpose.

  • Definition: Native resources are predefined by Kubernetes and come built into the API; custom resources are user-defined through a Custom Resource Definition.
  • Schema: Native schemas are built into the resource and immutable by the user; custom schemas are defined by the user when creating the CRD.
  • Controllers: Native resources come with default controllers provided by Kubernetes to manage their lifecycle; custom resources use custom controllers written by the user.
  • API versioning: Native resources follow the Kubernetes versioning lifecycle (e.g., v1, apps/v1); custom resources use user-defined versioning in the CRD (e.g., v1, v2).
  • Purpose: Native resources handle common Kubernetes workloads (e.g., pods, services); custom resources are tailored to specific application needs (e.g., Database, MLJob, Certificate, IngressRoute).
  • Usage scope: Native resources are standardized across all Kubernetes clusters; custom resources are specific to the cluster or application they are defined for.
  • Extensibility: Native resources are limited to Kubernetes’ core functionality and capabilities; custom resources are highly extensible and can model unique workflows and resources.
  • Integration: Native resources are fully integrated into Kubernetes, with built-in controllers and lifecycle management; custom resources integrate seamlessly into the Kubernetes API, behaving like native resources while offering flexibility beyond the core functionality.

By understanding these differences, we can get a better grasp of how CRDs function and why they empower developers to create custom resources. Let’s explore these concepts further in the next section.

How do CRDs work?

CRDs can be used to extend the functionality of Kubernetes to meet a developer’s specific needs. Although powerful on their own, when paired with the Operator Pattern they unlock a whole new level of automation and efficiency.

Think of operators as the “brains” behind custom resources. While CRDs define the structure of a resource, operators give it life by adding the necessary logic. Imagine you’re managing a database: tasks like scaling resources during high traffic or taking backups at regular intervals can be automated with an operator. However, it’s important to note that building an operator requires custom logic that must be carefully designed and thoroughly tested to ensure reliability. Once implemented, operators can also step in to recover systems if something goes wrong, adding significant value to your Kubernetes setup.
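Under this pattern, the operator watches instances of the custom resource and reconciles the cluster toward their spec. A hypothetical Database object of this kind (all names and fields invented for illustration) might look like:

```yaml
# A hypothetical custom resource: a database operator would watch
# objects of this kind and drive scaling and backups from the spec.
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
  namespace: e-commerce
spec:
  engine: postgres
  replicas: 3
  backup:
    schedule: "0 2 * * *"  # nightly backup the operator would run
    retentionDays: 7
```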

This kind of automation is especially important for applications that need to manage and store state consistently. With the Operator Pattern, developers can program Kubernetes to not just manage these custom resources, but to optimize them based on specific needs. Operators allow Kubernetes to handle applications as seamlessly as it handles its built-in resources.

In essence, operators take CRDs to the next level, turning them into powerful tools for automating workflows, ensuring reliability, and reducing the manual overhead of managing complex systems.

Why use CRDs?

So far, we’ve dug into CRDs, including what they are, how they work, and the different ways they can enhance your Kubernetes experience. Let’s look at why CRDs are essential in various use cases:

They automate complex workflows with custom controllers

By pairing CRDs with custom controllers, you can automate complex workflows specific to your custom resources. CRDs help create resources that fit unique developer needs. For instance, a custom VideoEncoder resource can automate scaling across regions by adding or removing pods based on demand. Controllers also handle pod failures by automatically recreating them, ensuring availability. Additionally, they manage routine tasks like backups or resource cleanup, helping to optimize performance.
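A VideoEncoder resource like the one described might look like this sketch (the group and every field are hypothetical); its controller would compare this desired state with the running pods and scale them accordingly:

```yaml
apiVersion: media.example.com/v1
kind: VideoEncoder
metadata:
  name: encoder-eu
spec:
  codec: h264
  regions: ["eu-west", "eu-central"]
  minReplicas: 2
  maxReplicas: 20
  scaleOnQueueLength: 100  # add pods when the encode queue exceeds this
```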

They help achieve consistency across teams

Imagine a large gaming company like Sony, where different teams manage various aspects of the application, such as game development and content delivery. CRDs enable you to create standardized resource types that are reusable across all teams. For example, the game development team in one location and the content delivery team in another can both use the same CRD schema. This ensures consistency and uniform configurations, no matter where the teams are based or what components they manage.

They enhance long-term scalability and extensibility

One of the key benefits of CRDs is their ability to scale and evolve alongside your organization’s needs. As your applications grow, CRDs allow you to integrate new features, support various workloads, and adapt to market changes without overhauling existing infrastructure. Think of CRDs as a plug-and-play solution, enabling easy expansion. Whether you’re adding features to gaming services or scaling resources, CRDs simplify the process of managing and evolving your application.

Using CRDs with Mia-Platform Console

Instead of going through the hassle of creating your custom resource from scratch, Mia-Platform Console offers a streamlined approach. Within its developer-friendly interface, you can easily define, deploy, and manage infrastructure resources, integrating them smoothly with your application. Mia-Platform makes it easy to manage custom resources alongside Infrastructure-as-Code (IaC) tools like Terraform or Crossplane, all integrated directly within the Console’s infrastructure management features. By defining Infrastructure Resources, you can easily manage CRDs in your Kubernetes cluster or even provision new clusters using your preferred IaC tool.

It’s easy to create and manage resources on Mia-Platform. You have two options for creating these resources: either create a new custom resource from the marketplace or create one from scratch.

To create a resource from scratch, the platform provides an easy-to-use console where you can fill in fields like apiVersion, kind, and metadata. Once your resource is set up, you can monitor, update, and version your custom resources entirely within the platform, allowing for quick adjustments and smooth integration with your existing Kubernetes setup.

Mia-Platform console also provides pre-built templates for various types of custom resources. This saves you the effort and time of creating CRDs from scratch. Simply browse through the templates and choose one that fits your use case. Once selected, these templates can be customized according to your application’s requirements.


What are the next steps?

As Kubernetes evolves, there is an increasing demand for robust, flexible solutions that let developers adapt Kubernetes to their circumstances without feeling confined by the built-in resources. Custom resource definitions have proven to be exactly that kind of tool within the Kubernetes ecosystem.

They allow developers to design resources tailored to their application or software demands, expanding Kubernetes’ capability. They enable automation, scalability, and customization, making the platform more flexible, adaptable, and efficient for various workflows.

With tools like Mia-Platform, you can deploy your modern Kubernetes configurations seamlessly in a matter of minutes. Check out this white paper, which provides insight into the real-world uses and gives you an overview of Kubernetes.
