Cloud-Native Meets Carbon Intelligence: Multi-Cluster Sustainability with Liqo and Karmada


By Ananta Cloud Engineering Team | September 11, 2025



[Image: wind turbines in a lush green field connected to a translucent cloud data center overlay, with the Liqo and Karmada logos woven into the landscape, representing eco-friendly cloud computing.]

The cloud was built for scale. But at Ananta Cloud, we believe it should also be built for sustainability.


As enterprises shift toward multi-cluster Kubernetes architectures across hybrid and multi-cloud environments, one crucial factor has often been overlooked in workload scheduling decisions: carbon intensity.


This post explores how to implement carbon-aware Kubernetes scheduling using two powerful open-source tools, Liqo and Karmada, and how Ananta Cloud is pioneering this green-ops movement.


The Problem: Compute Without Conscience

Kubernetes is excellent at managing efficiency at scale—optimizing resources, automating failover, and distributing workloads. But by default, Kubernetes does not consider the environmental impact of where workloads run.


Traditional Scheduling Logic:

  • Node CPU/memory availability

  • Pod/node affinities

  • Taints and tolerations

  • Cost or availability zone


Missing piece: Where is the power coming from? Does that node run on coal, solar, hydro, or wind?

💡 Fun Fact: Running a large ML training workload in a coal-powered data center can emit hundreds of kilograms of CO₂, while the same job in a renewable-powered region could be near-zero emissions.
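Back-of-envelope math with illustrative numbers: a training job drawing 1,000 kWh on a coal-heavy grid at roughly 800 gCO₂/kWh emits about 800 kg of CO₂, while the same 1,000 kWh on a hydro-dominated grid at roughly 30 gCO₂/kWh emits about 30 kg, a ~25x reduction for identical compute.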

Solution Overview: Building a Carbon-Aware Multi-Cluster Kubernetes Stack

We combine:

  • Liqo – to federate clusters and allow transparent offloading

  • Karmada – to orchestrate workload placement across clusters

  • Carbon Intensity APIs – to inform real-time, data-driven decisions


Component Deep Dive

1️⃣ Liqo: Dynamic Kubernetes Federation via Virtual Nodes

What it does: Liqo connects Kubernetes clusters via peer-to-peer federation, allowing one cluster to “consume” resources from another. It introduces virtual nodes in the local cluster that map to real nodes in a remote cluster.


How it helps:

  • Enables offloading workloads to clusters powered by clean energy

  • Supports hybrid (on-prem + cloud) and multi-cloud architectures

  • Works transparently with existing K8s workloads


Example: You run a data pipeline in Cluster A (coal-powered) during the day. At night, Cluster B (solar-powered, cheaper) becomes optimal. Liqo offloads the job without code changes.
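Once clusters are peered, offloading is typically enabled per namespace. The snippet below is a sketch using Liqo's liqoctl offload command; the flag names follow the Liqo documentation, but exact syntax varies across Liqo versions, so treat it as illustrative:

# Offload pods in the 'data-pipeline' namespace to peered (remote) clusters only
liqoctl offload namespace data-pipeline \
  --pod-offloading-strategy Remote \
  --namespace-mapping-strategy EnforceSameName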


Key Features

  • API compatibility with standard Kubernetes

  • Support for persistent volumes

  • Cross-cluster network peering and DNS resolution

  • Fine-grained offloading policies


2️⃣ Karmada: Multi-Cluster Orchestration with Declarative Policies

What it does: Karmada acts as a control plane across multiple Kubernetes clusters, enabling you to deploy workloads once and have them automatically propagated based on customizable rules.


How it helps:

  • Centralized workload placement

  • Cluster health and availability monitoring

  • Intelligent resource propagation via PropagationPolicy and OverridePolicy (see the sketch below)


Key Features

  • Failover and rescheduling across clusters

  • Integration with GitOps pipelines

  • Extensible scheduling logic (can be carbon-aware!)
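The PropagationPolicy example later in this post shows carbon-aware placement; as a companion, here is a minimal OverridePolicy sketch (the green-service Deployment and cluster-b names are illustrative) that tweaks one field for a specific cluster after placement:

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: green-service-overrides
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: green-service
  overrideRules:
    - targetCluster:
        clusterNames:
          - cluster-b            # illustrative member cluster
      overriders:
        plaintext:
          - path: /spec/replicas
            operator: replace    # run fewer replicas in this cluster
            value: 2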


3️⃣ Carbon Intensity APIs: External Sustainability Signals

You can integrate real-time and forecasted carbon data using:

  • ElectricityMap

  • WattTime

  • Tomorrow.io


These APIs give access to:

  • Real-time carbon intensity (gCO2/kWh)

  • Forecasts for the next 24–48 hours

  • Emissions by region/grid operator
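As a concrete illustration, the sketch below queries Electricity Maps' v3 "latest carbon intensity" endpoint for a single zone; the zone code and token are placeholders, so consult the provider's documentation for current paths and authentication:

import requests

def fetch_carbon_intensity(zone: str, token: str) -> float:
    """Return the latest grid carbon intensity (gCO2eq/kWh) for a zone."""
    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": zone},
        headers={"auth-token": token},  # Electricity Maps API token
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

# Example (IN-EA is Electricity Maps' zone for Eastern India):
# print(fetch_carbon_intensity("IN-EA", "YOUR_TOKEN"))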


Architecture Overview

High-Level Workflow

  1. Carbon-Aware Controller (custom) fetches current carbon intensity for each cluster region.

  2. It assigns a carbon score to each cluster.

  3. The controller updates the cluster labels (e.g., co2-score) that Karmada's PropagationPolicy selects on, steering placement toward the lowest score.

  4. If needed, Liqo handles transparent offloading to the selected foreign cluster.

sequenceDiagram
    participant Dev as DevOps Engineer
    participant Karmada as Karmada Control Plane
    participant CarbonAPI as Carbon Intensity API
    participant Liqo as Liqo Cluster Peer
    participant Workload as Kubernetes Workload

    Dev->>Karmada: Define multi-cluster deployment
    Karmada->>CarbonAPI: Fetch carbon data
    CarbonAPI-->>Karmada: Return carbon intensity scores
    Karmada->>Karmada: Evaluate best cluster (lowest CO₂)
    Karmada->>Liqo: Offload workload to clean cluster
    Liqo->>Workload: Run workload in green region

Real-World Use Case Scenarios

01 - AI/ML Training Jobs

  • Challenge: GPU-heavy training workloads carry an outsized carbon footprint.

  • Solution: Schedule model training in regions with lowest emissions during off-peak times.


02 - Enterprise SaaS

  • Challenge: Need to meet ESG (Environmental, Social, Governance) goals.

  • Solution: Report CO₂ savings with audit logs of carbon-aware workload decisions.


03 - Global CDN Services

  • Challenge: Geo-distributed services already run redundantly across multiple regions.

  • Solution: When latency difference is minimal, prefer green-powered clusters.


Implementation Guide

Set Up Liqo Across Clusters

# Install Liqo on the local cluster (kind provider shown; use your provider's name)
liqoctl install kind --cluster-name cluster-a

# Peer with the remote cluster (peering flags vary by Liqo version; see the Liqo docs)
liqoctl peer cluster-b --auth-mode token

Deploy Karmada and Join Clusters

# Install Karmada
curl -sSL https://raw.githubusercontent.com/karmada-io/karmada/main/hack/deploy/karmada.sh | bash

# Join member clusters to the Karmada control plane
karmadactl join cluster-a --cluster-kubeconfig=kubeconfig-a
karmadactl join cluster-b --cluster-kubeconfig=kubeconfig-b

Create a Custom Carbon Score Controller

Build a small custom controller (for example, a CRD-backed operator or a scheduled job; a sketch follows the list) that:

  • Queries carbon APIs hourly

  • Labels each cluster with its carbon score (e.g., co2-score=low)

  • Keeps the PropagationPolicy's clusterAffinity rules aligned with those labels
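Below is a minimal Python sketch of such a controller. It assumes an Electricity Maps API token, a kubeconfig pointing at the Karmada API server, and a hypothetical mapping from member cluster names to grid zones; it stamps each Karmada Cluster object with a co2-score label that the PropagationPolicy in the next step matches on:

# carbon_score_controller.py - a minimal sketch, not a production controller
import time

import requests
from kubernetes import client, config

# Hypothetical mapping from Karmada member cluster names to grid zones
ZONES = {"cluster-a": "IN-EA", "cluster-b": "DE"}
THRESHOLD = 200.0  # gCO2eq/kWh below which a cluster counts as "low"
API_TOKEN = "YOUR_ELECTRICITYMAPS_TOKEN"  # placeholder

def fetch_intensity(zone: str) -> float:
    """Latest grid carbon intensity for a zone, in gCO2eq/kWh."""
    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": zone},
        headers={"auth-token": API_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

def label_cluster(api: client.CustomObjectsApi, name: str, score: str) -> None:
    """Patch the co2-score label on a Karmada Cluster object."""
    api.patch_cluster_custom_object(
        group="cluster.karmada.io",
        version="v1alpha1",
        plural="clusters",
        name=name,
        body={"metadata": {"labels": {"co2-score": score}}},
    )

def main() -> None:
    # The kubeconfig must point at the Karmada API server, where Cluster objects live
    config.load_kube_config()
    api = client.CustomObjectsApi()
    while True:
        for cluster, zone in ZONES.items():
            intensity = fetch_intensity(zone)
            score = "low" if intensity < THRESHOLD else "high"
            label_cluster(api, cluster, score)
            print(f"{cluster}: {intensity:.0f} gCO2/kWh -> co2-score={score}")
        time.sleep(3600)  # re-evaluate hourly, as described above

if __name__ == "__main__":
    main()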

Define a PropagationPolicy

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: carbon-aware-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: green-service
  placement:
    clusterAffinity:
      # Karmada's clusterAffinity selects clusters by label selector (it has no
      # node-affinity-style weights); the co2-score label comes from the controller
      labelSelector:
        matchExpressions:
          - key: co2-score
            operator: In
            values:
              - low
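Two design notes on this policy: clusterAffinity.labelSelector is a hard filter, so the Deployment only lands on clusters currently labeled co2-score=low, and eligibility shifts as the controller refreshes the labels (rescheduling behavior depends on your Karmada version). For weighted rather than all-or-nothing placement, Karmada's replicaScheduling weightPreference can split replicas across clusters instead.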

Monitoring and Observability

Use Prometheus + Grafana to visualize:

  • CO₂ intensity per cluster

  • Carbon savings over time

  • Workload migration logs


Add custom Grafana panels using data from:

  • Carbon APIs

  • Karmada logs

  • Liqo virtual node usage
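To get those panels started, one lightweight option is a small exporter built on the Python prometheus_client library; the metric name, port, and placeholder values below are our own choices, not a standard:

import time

from prometheus_client import Gauge, start_http_server

CARBON_INTENSITY = Gauge(
    "cluster_carbon_intensity_gco2_kwh",
    "Latest grid carbon intensity for the region hosting a cluster",
    ["cluster"],
)

def latest_intensity(cluster: str) -> float:
    # Stub: wire this to the carbon API helper shown earlier in this post
    return {"cluster-a": 430.0, "cluster-b": 55.0}[cluster]  # placeholder values

if __name__ == "__main__":
    start_http_server(9105)  # arbitrary port; add it as a Prometheus scrape target
    while True:
        for cluster in ("cluster-a", "cluster-b"):
            CARBON_INTENSITY.labels(cluster=cluster).set(latest_intensity(cluster))
        time.sleep(300)  # refresh every 5 minutes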


Challenges and Considerations

  • Challenge: Data latency in carbon APIs. Solution: Use forecasted data and moving averages.

  • Challenge: SLA impact from offloading. Solution: Set fallback logic in Karmada policies.

  • Challenge: Security concerns in cluster federation. Solution: Use Liqo's RBAC and TLS options.

  • Challenge: Multi-cloud cost trade-offs. Solution: Integrate with cost optimization tools (e.g., Kubecost).

Summary: Why This Matters

Implementing carbon-aware Kubernetes scheduling:

  • Helps enterprises meet net-zero goals

  • Optimizes for performance, cost, and carbon

  • Turns cloud-native infrastructure into a climate-positive asset


Marginal CO2 Emissions in Eastern India — https://app.electricitymaps.com

At Ananta Cloud, we believe that innovation must be sustainable. Leveraging Liqo and Karmada, we’re bringing a new dimension to cloud orchestration: one where code meets conscience.


Ready to take your Kubernetes strategy to the next level and reduce your carbon footprint?


Partner with Ananta Cloud to deploy a carbon-aware CI/CD pipeline tailored to your business.




