The missing ingredient in most digital twins

Digital twins can’t deliver intelligence without semantics.

They need to understand not just where things are in 3D space, but what they are and how they relate. Without that layer of meaning, a twin is just a pretty picture — visually rich, but functionally shallow.

In most cases, the barrier isn’t data collection. It’s structure. IoT systems, BIM models, and asset management databases all exist in silos, each with its own data model and naming conventions. Translating these disparate inputs into a single coherent semantic model, one in which the AI can understand that this chiller feeds that AHU and that this AHU serves these zones, has been the hardest part of making digital twins intelligent.
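
To make the problem concrete, here is a purely illustrative Python sketch of how a single chiller might appear in three separate systems. Every identifier below is invented; the point is only that nothing in the raw records links them to each other or to the equipment they serve.

    # Purely illustrative: the same physical chiller as three different systems might record it.
    bim_record = {"GlobalId": "3vB9k0_example", "Category": "Chillers", "Name": "CH-01"}   # BIM export
    iot_point  = {"point_id": "bldgA/plant/chlr1/chw_supply_temp", "unit": "degC"}         # IoT telemetry
    cmms_asset = {"asset_no": "EQ-004512", "description": "CHILLER #1 - CENTRAL PLANT"}    # asset database

    # Nothing here says these describe the same machine, let alone which AHU it feeds
    # or which zones that AHU serves; that linkage is the missing semantic layer.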


The bottleneck of semantics at scale

Traditionally, semantic relationships are built manually. Tags, labels, and identifiers are painstakingly applied by engineers or integrators, defining how systems connect and interact. This works for a single building, but not for a portfolio of hundreds.

Manual structuring doesn’t scale.

Every new site means re-tagging thousands of assets, re-mapping relationships, and re-integrating data sources. The result: brittle models, slow deployments, and AI systems that can’t generalize.


Where OpenUSD changes the game

OpenUSD (Universal Scene Description) solves one of the biggest challenges in the digital twin space: maintaining both geometry and meaning within a single, standardized framework.

Originally developed by Pixar and now backed by NVIDIA, Apple, and Autodesk, OpenUSD provides a 3D data model that holds both spatial and semantic information natively. It allows geometry, metadata, and relationships to coexist within a unified schema, so that “an AHU serving a specific zone” is not just geometry, but a meaningful, queryable relationship.
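
As a rough illustration of geometry and meaning living side by side, the minimal sketch below uses the open-source USD Python bindings (the pxr module from the usd-core package). The prim paths, the akila: namespace, and the relationship name are invented for this example, not part of any standard schema.

    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateInMemory()

    # Geometry: an AHU and the zone it serves, modelled as transformable prims.
    ahu  = UsdGeom.Xform.Define(stage, "/Building/Level1/AHU_01").GetPrim()
    zone = UsdGeom.Xform.Define(stage, "/Building/Level1/Zone_C").GetPrim()

    # Semantics: a custom, namespaced relationship expressing "AHU_01 serves Zone_C".
    ahu.CreateRelationship("akila:serves").AddTarget(zone.GetPath())

    # The relationship is now part of the scene description itself and can be queried.
    print(stage.GetRootLayer().ExportToString())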

Because OpenUSD is extensible, developers can define new schemas for domain-specific data (like HVAC or energy systems) while maintaining interoperability with visualization and simulation platforms such as NVIDIA Omniverse.
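
A production deployment would typically register a proper codeless or generated schema, but namespaced custom attributes give a flavour of how domain-specific HVAC data can be attached without losing interoperability. The attribute names below are invented for this sketch.

    from pxr import Usd, UsdGeom, Sdf

    stage = Usd.Stage.CreateInMemory()
    ahu = UsdGeom.Xform.Define(stage, "/Building/Level1/AHU_01").GetPrim()

    # Domain-specific HVAC metadata in a custom namespace; a real schema would let
    # USD-aware tools validate these properties, but they remain readable either way.
    ahu.CreateAttribute("akila:hvac:designAirflow_m3s", Sdf.ValueTypeNames.Float).Set(4.2)
    ahu.CreateAttribute("akila:hvac:systemType", Sdf.ValueTypeNames.String).Set("VAV")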

In other words, OpenUSD bridges building intelligence (the semantic graph) with GPU-native simulation and visualization, connecting two worlds that have historically operated in isolation.


How Akila operationalizes semantics with OpenUSD

At Akila, we’ve turned this architecture into a working system.

Our proprietary graph database encodes the relationships between building systems: how equipment connects, where energy flows, and how faults propagate. This forms the semantic backbone of the digital twin.
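
Akila’s graph database is proprietary, so the sketch below uses the networkx package purely as a toy stand-in to show the kind of structure this implies: typed edges between pieces of equipment, over which questions such as “what does a fault here affect downstream?” become simple traversals.

    import networkx as nx

    # Toy stand-in for a semantic graph of building systems (not Akila's actual schema).
    g = nx.DiGraph()
    g.add_edge("Pump_A", "Chiller_B", rel="feeds")
    g.add_edge("Chiller_B", "AHU_01", rel="supplies_chilled_water")
    g.add_edge("AHU_01", "Zone_C", rel="serves")

    # Fault propagation as a reachability query: everything downstream of Pump_A.
    print(nx.descendants(g, "Pump_A"))  # {'Chiller_B', 'AHU_01', 'Zone_C'}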

OpenUSD then acts as the universal expression layer, transforming that semantic graph into a 3D environment that’s readable by visualization engines, AI agents, and simulation tools alike.

For example:

  • The graph database understands that “Pump A feeds Chiller B” and “Chiller B conditions Zone C.”
  • OpenUSD represents that same relationship spatially and semantically, linking geometry, telemetry, and metadata in one structure.
  • AI and simulation modules can now operate on that data directly, querying relationships, running optimizations, and updating the twin in real time (see the sketch after this list).
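
As a minimal sketch of that round trip (again with the usd-core Python bindings, invented prim paths, and invented relationship names), the snippet below expresses two graph edges as USD relationships and then traverses them back the way an AI or simulation module might.

    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateInMemory()

    # Express graph nodes as prims and graph edges as USD relationships.
    pump    = UsdGeom.Xform.Define(stage, "/Plant/Pump_A").GetPrim()
    chiller = UsdGeom.Xform.Define(stage, "/Plant/Chiller_B").GetPrim()
    zone    = UsdGeom.Xform.Define(stage, "/Building/Zone_C").GetPrim()

    pump.CreateRelationship("akila:feeds").AddTarget(chiller.GetPath())
    chiller.CreateRelationship("akila:conditions").AddTarget(zone.GetPath())

    # A consumer (AI agent, simulation tool) can walk the same structure directly.
    for prim in stage.Traverse():
        for rel in prim.GetRelationships():
            for target in rel.GetTargets():
                print(f"{prim.GetPath()} --{rel.GetName()}--> {target}")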

The result: a digital twin that’s not just visual, but contextually intelligent, capable of reasoning about cause, effect, and system interactions.


Semantic automation at scale

With this architecture, Akila can apply semantic structure automatically across entire portfolios.

Instead of manually tagging each site, our AI-assisted ingestion process maps incoming BIM and IoT data into the existing semantic graph, then generates a synchronized OpenUSD scene from it.
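
The real pipeline relies on AI-assisted matching, but a deliberately simplified, rule-based stand-in shows the shape of the step: raw BIM and IoT names are normalized onto canonical equipment types before being inserted into the semantic graph and reflected in the OpenUSD scene. All names and rules below are invented for illustration.

    import re

    # Deliberately simplified stand-in for AI-assisted ingestion: map raw vendor
    # names onto canonical equipment types before adding them to the semantic graph.
    CANONICAL_TYPES = {
        r"ahu|air.?handler": "AirHandlingUnit",
        r"chlr|chiller":     "Chiller",
        r"vav":              "VAVBox",
    }

    def classify(raw_name: str) -> str | None:
        """Return a canonical equipment type for a raw BIM/IoT name, if any rule matches."""
        lowered = raw_name.lower()
        for pattern, canonical in CANONICAL_TYPES.items():
            if re.search(pattern, lowered):
                return canonical
        return None

    print(classify("bldgA/plant/chlr1/chw_supply_temp"))  # Chiller
    print(classify("Level1_AirHandler_04"))               # AirHandlingUnit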

That means:

  • Rapid onboarding for new buildings
  • Consistent data semantics across geographies and vendors
  • Scalable, AI-ready twins that maintain interoperability from model to simulation to operations

This enables simulation, analytics, and automation that are fast, interoperable, and no longer dependent on bespoke integrations: a key step toward self-learning, self-optimizing buildings.


Why this matters

By combining semantic intelligence (graph database) with spatial interoperability (OpenUSD), Akila is creating a foundation for AI that can truly understand buildings, not as isolated systems, but as interconnected organisms.

This fusion unlocks:

  • Context-aware AI reasoning (understanding relationships, dependencies, and priorities)
  • Cross-domain simulation (running physics and energy models directly on the same data structure)
  • Portfolio-level automation (applying the same semantic logic across thousands of assets)

This is how we make digital twins truly intelligent. Not just by unifying data, but by giving each twin a semantic backbone that makes sense of it.