
The Anti-Cloud for Liquid Compute Futures

Earn as a provider. Trade as a market maker. Build as a creator.

Overview

Alien is a decentralized compute platform that connects GPU providers with users who need computational resources. Our distributed architecture enables efficient resource allocation and instant settlement through blockchain technology.

Computational power should be accessible and liquid, available to any user or organization that needs it. The Alien platform removes traditional barriers by creating a marketplace where compute resources flow efficiently based on supply and demand.

The platform integrates five core components: a compute marketplace, zero lock-in infrastructure, adapter framework, custom development tools, and dual-token economics. This architecture enables seamless access to distributed GPU resources while ensuring fair compensation for providers.

Compute Marketplace

Transform idle GPU capacity into revenue. Connect to our global compute marketplace and start earning immediately.

Providers connect to the network through our launcher system. Whether you operate gaming hardware or enterprise servers, the platform enables monetization of unused compute cycles. The implementation, documented in alien/launcher/cli/main.py, provides straightforward onboarding for all provider types.

CLI Example

$ alien start

"Maximize GPU utilization by joining our compute network." — Alien Documentation

Network Statistics

ACU Tokens Earned: [Live Data]

AVL Rewards Accumulated: [Live Data]

Zero Lock-In Infrastructure

Access compute resources on demand. No long-term contracts or minimum commitments required.

The settlement router, implemented in Python within our control plane, ensures accurate accounting of all compute usage. The system employs dual-signature cryptography—primary Ed25519 with embedded public key, secondary PQ-style derived from SHAKE-256—providing cryptographic verification that meets strict audit requirements.
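To make the secondary-key derivation concrete, here is a minimal sketch of deriving key material from a primary Ed25519 public key with SHAKE-256. The function name, context string, and output length are illustrative assumptions; the control plane's actual scheme may bind additional data:

```python
import hashlib

def derive_secondary_key(ed25519_pubkey: bytes, context: bytes = b"alien-pq-v1") -> bytes:
    """Derive 64 bytes of secondary key material from the primary public key.

    Hypothetical sketch: SHAKE-256 is an extendable-output function, so any
    output length can be requested; 64 bytes is an arbitrary choice here.
    """
    shake = hashlib.shake_256()
    shake.update(context)          # domain-separation context
    shake.update(ed25519_pubkey)   # bind to the primary Ed25519 key
    return shake.digest(64)

# Example with a 32-byte stand-in for an Ed25519 public key
pubkey = bytes(range(32))
secondary = derive_secondary_key(pubkey)
print(len(secondary))  # 64
```

The derivation is deterministic, so verifiers can recompute the secondary key from the embedded Ed25519 public key alone.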

Our TWAP (Time-Weighted Average Price) implementation ensures fair pricing, as shown in control_plane/settlement.py:

# From control_plane/settlement.py
total_minutes = sum(
    row["minutes_delta_scm_micro"] for row in slices
)
numerator = sum(
    row["minutes_delta_scm_micro"] * row["priceindex_micro_usd"]
    for row in slices
)
burn_micro_acu = math.ceil(
    numerator / self.mint_price_micro_usd
)

This algorithm lets providers liquidate their compute contributions instantly, while consumers access resources without traditional enterprise contract overhead. The absence of lock-in periods gives both sides flexibility and reflects the platform's commitment to open-access computing.
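Plugging representative numbers into the settlement calculation makes the burn amount concrete. The slice values and mint price below are illustrative, not real network data:

```python
import math

# Hypothetical usage slices: micro-SCM-minutes consumed at a micro-USD
# price index (field names follow control_plane/settlement.py).
slices = [
    {"minutes_delta_scm_micro": 30_000_000, "priceindex_micro_usd": 20_000},
    {"minutes_delta_scm_micro": 10_000_000, "priceindex_micro_usd": 25_000},
]
mint_price_micro_usd = 20_000  # assumed ACU mint price: $0.02 in micro-USD

# Time-weighted value of the usage, in micro-USD units
numerator = sum(
    row["minutes_delta_scm_micro"] * row["priceindex_micro_usd"] for row in slices
)
# Rounding up ensures the burn never under-counts consumed value
burn_micro_acu = math.ceil(numerator / mint_price_micro_usd)
print(burn_micro_acu)  # 42500000
```

Because the price index is weighted by minutes consumed, short spikes in the index move the burn amount only in proportion to the time actually used at that price.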

Adapter Framework

Five production-ready adapters transform raw compute into specialized capabilities. Each adapter maps a class of workloads to resource requirements and an execution plan tuned for it.

Training Adapter

training.py

Distributed deep learning with DDP strategy. Handles multi-GPU synchronization, gradient accumulation, and checkpoint management.

Features: PyTorch/TensorFlow · Multi-GPU · Fault Tolerant

ResourceProfile(
    num_gpus=job_spec.get("num_gpus", 1),
    min_vram_gb=job_spec.get("min_vram_gb", 40),
    interconnect=("nvlink",),
    scm_minutes=60
)

Inference Adapter

inference.py

High-throughput model serving with automatic batching, caching, and load balancing across heterogeneous hardware.

Features: Auto-batching · Model Caching · Load Balancing

ExecutionPlan(
    strategy="service",
    expose_service=True,
    service_ports=[{"port": 8080}],
    autoscaling={"min": 1, "max": 10}
)

Rendering Adapter

rendering.py

Graphics and visualization workloads. Supports ray tracing, path tracing, and real-time rendering pipelines.

Features: Ray Tracing · Path Tracing · Real-time

profile.features = (
    "cuda>=12.1",
    "optix",
    "opengl"
)

Quantization Adapter

quantization.py

Model optimization through precision reduction. Supports INT8, FP16, and mixed-precision quantization strategies.

Features: INT8/FP16 · Mixed Precision · ONNX Export

metadata={
    "quantization_bits": 8,
    "calibration_samples": 1000,
    "optimization_level": "O2"
}

Federated Adapter

alien/federated_addon/adapter.py

Privacy-preserving distributed learning with differential privacy guarantees and secure aggregation.

Features: Differential Privacy · Secure Aggregation · Committee Consensus

env = {
    "FED_DP_EPSILON": str(3.0),
    "FED_DP_DELTA": str(1e-5),
    "FED_COMMITTEE_K": str(3)
}
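For intuition, the (ε, δ) values above are the standard differential-privacy parameters. A minimal sketch of how a client might convert them into a Gaussian-mechanism noise scale (the helper name and sensitivity value are illustrative, not part of the adapter; the classic bound below is only tight for ε < 1):

```python
import math
import os

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float = 1.0) -> float:
    """Classic Gaussian-mechanism noise scale:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon.
    Shown for intuition only; real deployments use tighter accountants.
    """
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# Read the same environment variables the federated adapter sets.
epsilon = float(os.environ.get("FED_DP_EPSILON", "3.0"))
delta = float(os.environ.get("FED_DP_DELTA", "1e-5"))
print(round(gaussian_sigma(epsilon, delta), 4))
```

Smaller ε (stronger privacy) raises the noise scale, which is the trade-off these two environment variables expose to federated jobs.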

Custom Adapter Development

Build. Deploy. Scale. The adapter registry enables workload-specific orchestration patterns. From distributed training to auto-scaling inference, each adapter defines how compute resources transform into capabilities.

The adapter protocol, defined in alien/adapters/base.py, bridges workload specifications to orchestration plans with over 30 configurable parameters, covering workloads from training and inference to rendering and quantization.

Example implementation (alien/adapters/inference.py):
from typing import Dict

from alien.adapters.base import Adapter, ExecutionPlan, ResourceProfile
from alien.adapters.loader import register_adapter

class InferenceAdapter(Adapter):
    """Service-oriented execution with auto-scaling."""
    
    def prepare(self, job_spec: Dict[str, object]) -> tuple[ResourceProfile, ExecutionPlan]:
        profile = ResourceProfile(
            num_gpus=int(job_spec.get("num_gpus", 1)),
            min_vram_gb=int(job_spec.get("min_vram_gb", 16)),
            interconnect=tuple(job_spec.get("interconnect", ("pcie",))),
            scm_minutes=int(job_spec.get("scm_minutes", 60)),
            features=tuple(job_spec.get("features", ()))
        )
        
        plan = ExecutionPlan(
            image=job_spec.get("image", "ghcr.io/alien/inference:latest"),
            command=tuple(job_spec.get("command", ("python", "-m", "server"))),
            env={k: str(v) for k, v in job_spec.get("env", {}).items()},
            volumes={},
            strategy="service",
            rendezvous={"type": "none"},
            io={"mode": "stream"},
            replicas=int(job_spec.get("replicas", 1)),
            service_type="ClusterIP",
            readiness_probe={"httpGet": {"path": "/health", "port": 8080}},
            liveness_probe={"httpGet": {"path": "/health", "port": 8080}},
            autoscaling=job_spec.get("autoscaling"),
            restart_policy="Always"
        )
        return profile, plan
    
    def map_metrics(self, raw: Dict[str, object]) -> Dict[str, object]:
        return {"latency_p95_ms": raw.get("latency_p95_ms", 0),
                "throughput_qps": raw.get("qps", 0)}

# Register your adapter globally
register_adapter("my-inference", InferenceAdapter)
At a glance: 5 core adapters, 30+ plan fields, plus custom adapters via the registry.

Built-in Adapters

  • TrainingAdapter: DDP, multi-GPU
  • InferenceAdapter: Service-oriented
  • RenderingAdapter: NVENC, tiling
  • QuantizationAdapter: PTQ/QAT
  • Your custom adapter via registry

Advanced Capabilities

  • Kubernetes orchestration
  • Auto-scaling & health checks
  • Custom metrics mapping
  • Hardware feature detection
  • Multi-region placement

Revenue Model for Adapter Creators

The Alien platform empowers developers to create custom adapters that extend the platform's capabilities. When you register a custom adapter through the global registry system, you're not just contributing code—you're enabling new computational markets.

How Adapter Creators Participate

  • Build specialized execution patterns for unique workloads
  • Optimize resource utilization for specific use cases
  • Contribute to the growing library of computational primitives
  • Enable new markets for specialized compute tasks

Economic Opportunities

  • Your adapter becomes available in the global registry
  • Providers worldwide can use your adapter to execute workloads
  • You define resource requirements and execution patterns
  • The system handles scheduling, metering, and settlement
"The adapter registry system (register_adapter) allows seamless integration of custom adapters, making them immediately available to the global compute network. Every adapter creator becomes a stakeholder in the decentralized compute economy."

Token Economics

The dual-token architecture balances computational supply and demand through complementary economic forces. ACU governs settlement, AVL incentivizes availability—both controlled by sophisticated governance mechanisms.

ACU Token vs. AVL Token

Supply:     ACU — 1,000,000,000, fixed supply cap (S_MAX)
            AVL — configurable hard cap set at deployment (MAX_SUPPLY)
Purpose:    ACU — settlement currency for compute transactions
            AVL — availability incentives & provider staking
Mechanism:  ACU — deflationary; burned via ConversionRouter when AVL is minted
            AVL — controlled; minted via Merkle distribution with the κ parameter
Governance: ACU — governable; single governor address
            AVL — AccessControl; role-based permissions
Contract:   ACU — ACUToken.sol
            AVL — AvailabilityToken.sol
"Unlike traditional inflationary models, AVL issuance is mathematically bounded by actual network capacity and treasury sustainability." — Alien Platform Documentation

AVL Governance & Control Mechanisms

The Kappa (κ) Parameter

The core control mechanism governing AVL availability rewards. Currently set at $0.02 per ACU.

// Availability Reward Formula
Δ_AR = ceil(qMicro × κ / MintPrice)
  • Default: 20,000 micro-USD per ACU
  • Adjustable via timelock governance
  • Protected by treasury runway checks
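The formula above worked through in code. The quantity is illustrative; κ uses the documented default, and the mint price is assumed equal to it here:

```python
import math

KAPPA_MICRO_USD = 20_000        # κ: $0.02 per ACU in micro-USD (documented default)
MINT_PRICE_MICRO_USD = 20_000   # assumed ACU mint price in micro-USD

def availability_reward(q_micro: int) -> int:
    """Δ_AR = ceil(qMicro × κ / MintPrice), per the formula above."""
    return math.ceil(q_micro * KAPPA_MICRO_USD / MINT_PRICE_MICRO_USD)

# With κ equal to the mint price, the reward equals the quantity one-for-one
print(availability_reward(1_500_000))  # 1500000
```

Raising κ via timelock governance scales all availability rewards proportionally, which is why the treasury runway check gates any adjustment.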

Role-Based Access Control

Three critical roles govern AVL token operations with distinct permissions.

MINTER_ROLE
Mint/burn tokens, enforce MAX_SUPPLY cap
PAUSER_ROLE
Emergency pause/unpause operations
SLASHER_ROLE
Penalize misbehaving providers

Daily AVL Distribution Algorithm

30% Entity Cap — maximum rewards per entity, to prevent centralization
10% Probe Threshold — providers failing more than 10% of probes receive zero rewards
180 Runway Days — minimum treasury runway required for AVL minting

# Python distribution logic (sketch)
eligible_idle = max(0, committed_scm - delivered_scm)
if probe_failed_pct > 10:
    weight = 0
if entity_total > cap:
    scale_down_proportionally()
merkle_root = build_tree(provider_payouts)
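A minimal, self-contained sketch of computing a Merkle root over provider payouts. The hash scheme, leaf encoding, and odd-node handling are illustrative assumptions; the real distributor defines its own canonical format:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(payouts: dict[str, int]) -> bytes:
    """Root over (provider, amount) leaves, sorted by key for determinism.

    Illustrative only: adjacent nodes are paired left-to-right and an
    odd trailing node is promoted unchanged to the next level.
    """
    level = [
        _h(f"{provider}:{amount}".encode())
        for provider, amount in sorted(payouts.items())
    ]
    if not level:
        return _h(b"")
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(_h(level[i] + level[i + 1]))
        if len(level) % 2:      # odd node promoted unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

root = merkle_root({"provider-a": 1200, "provider-b": 800})
print(root.hex())
```

Because the root is deterministic for a given payout set, providers can independently verify inclusion of their payout with a logarithmic-size proof instead of trusting the distributor.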

Complete Token Lifecycle

ACU Flow:
  • User payment in ACU
  • ACU routed through ConversionRouter
  • Settlement completes
  • ACU burned forever

AVL Flow:
  • Idle GPU capacity reported
  • κ × budget calculation
  • Merkle distribution
  • AVL minted (≤ MAX_SUPPLY)

Provider Staking Loop:
  Stake AVL → Provide Capacity → Earn Rewards → Risk: Slashing
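The staking loop can be sketched as a tiny accounting object. This is entirely hypothetical; the contract-level implementation in AvailabilityToken.sol (stake custody, SLASHER_ROLE enforcement) will differ:

```python
class ProviderStake:
    """Hypothetical sketch of the stake → provide → earn → slash loop."""

    def __init__(self, staked_avl: int):
        self.staked_avl = staked_avl
        self.earned_avl = 0

    def earn(self, reward_avl: int) -> None:
        # Rewards accrue while the provider delivers committed capacity.
        self.earned_avl += reward_avl

    def slash(self, fraction: float) -> int:
        # Misbehaviour burns a fraction of the stake (SLASHER_ROLE on-chain).
        penalty = int(self.staked_avl * fraction)
        self.staked_avl -= penalty
        return penalty

stake = ProviderStake(staked_avl=10_000)
stake.earn(250)
penalty = stake.slash(0.05)
print(stake.staked_avl, stake.earned_avl, penalty)  # 9500 250 500
```

The key property the loop illustrates: rewards and stake are separate balances, so slashing penalizes the security deposit without clawing back already-earned distributions.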

Get Started

Start monetizing your GPU capacity or access on-demand compute resources through the Alien platform. Join thousands of providers and users building the future of distributed computing.

Our platform provides reliable, cost-effective compute infrastructure with instant settlement and transparent pricing. Get started today with our comprehensive documentation and SDK.