Iris is currently in development. The capabilities described on this page reflect our design and roadmap for the intelligence layer.

Iris

Describe the analysis.
Get a validated pipeline.

Tell Iris what you need in plain language. It generates a complete, validated Fabric Pattern in seconds: structured, constrained output with no hallucinated steps. The intelligence layer between your intent and the pipeline.

How it works →

That's it: a complete Pattern appears in your Studio workspace in seconds.

AOI → Data → Pre → Analysis → Post → Output

Iris is the intelligence layer. Describe your analysis in plain language and Iris generates a validated Pattern spanning all six stages. See the full pipeline →

Seconds: target time from natural language to validated Pattern
28+ tasks: the full Fabric catalog of download, processing, and analysis operations
3-layer validation: structural, catalog, and business rules
0.0–1.0: confidence score on every generation

You know what you need to analyze.
Wiring it up is the problem.

A GIS analyst knows they need vegetation change detection over the Amazon basin for the last 90 days. They know which sensors matter, roughly which spectral indices to use, and what the output should look like. What they do not know (or do not want to rebuild every time) is the exact sequence of download, preprocessing, and analysis steps required to get from intent to result.

That wiring is where projects stall. Selecting the right data sources, configuring band combinations, setting up the correct preprocessing chain, ensuring spatial alignment across sensors. It is not the analysis itself that takes days; it is the preparation that precedes it.[1]

[1] USGS, "The Value of Data Management", citing Michener (2015): "80% of a scientist's effort is spent discovering, acquiring, documenting, transforming, and integrating data... 20% is devoted to analysis, visualization, and new discoveries." (usgs.gov)

Iris is designed to eliminate the gap between knowing what you want and having a valid pipeline to produce it. Describe the goal. Iris maps it to the Fabric task catalog, selects appropriate sensors and parameters, and generates a Pattern that Engine can execute. The output is structured, validated, and constrained: every Pattern is checked against the task registry before it reaches you.

From intent to execution in four steps.

01

Describe your analysis goal

State what you need in plain language: "fire scar analysis for Northern California, September 2025" or "download and harmonize Sentinel-2 and Landsat for my study area." Iris accepts freeform input or guided selections, whatever gets your intent across fastest.

02

Iris maps intent to the task catalog

Iris identifies the analysis type (vegetation, fire, flood, terrain, urban, infrastructure) and selects from the Fabric task catalog. It recommends sensors, bands, preprocessing steps, and analysis operations appropriate to your goal, assembling them into a complete Pattern with correct dependencies.
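Because Iris is still in development, its actual Pattern format is not public. The sketch below is a hypothetical illustration of the idea described above: named steps drawn from a task catalog, wired together with explicit dependencies that determine execution order. Every task name and field here is invented.

```python
# Hypothetical sketch only: the task names, fields, and schema are invented
# for illustration, not Fabric's actual Pattern format.
pattern = {
    "goal": "fire scar analysis, Northern California, September 2025",
    "steps": [
        {"id": "aoi",      "task": "define_aoi",         "depends_on": []},
        {"id": "download", "task": "download_sentinel2", "depends_on": ["aoi"]},
        {"id": "preproc",  "task": "cloud_mask",         "depends_on": ["download"]},
        {"id": "analyze",  "task": "compute_dnbr",       "depends_on": ["preproc"]},
    ],
}

def execution_order(steps):
    """Topologically sort steps so every dependency runs before its dependents."""
    order, done = [], set()
    while len(order) < len(steps):
        progressed = False
        for step in steps:
            if step["id"] not in done and all(d in done for d in step["depends_on"]):
                order.append(step["id"])
                done.add(step["id"])
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle in Pattern")
    return order
```

"Correct dependencies" is what the topological sort enforces: download before preprocessing, preprocessing before analysis.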

03

Three-layer validation

Before Iris returns anything, the generated Pattern passes through three validation layers. Structural validation confirms the Pattern is well-formed. Task catalog validation checks every step against the official Fabric registry, rejecting invented operations and impossible dependencies. Business rule validation catches missing inputs, placeholder values, and configuration conflicts. If something doesn't pass, Iris asks for clarification rather than guessing.
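A minimal sketch of how three such layers might compose. The registry contents, Pattern schema, and business rules below are invented for illustration; this is not Fabric's actual implementation.

```python
# Invented registry for the sketch; the real Fabric registry is larger.
TASK_REGISTRY = {"download_sentinel2", "cloud_mask", "compute_dnbr"}

def validate(pattern):
    warnings = []
    # Layer 1 (structural): the Pattern must be well-formed at all.
    steps = pattern.get("steps")
    if not isinstance(steps, list) or not steps:
        return ["structural: Pattern has no steps"]
    ids = {s["id"] for s in steps}
    # Layer 2 (task catalog): every step must exist in the registry and
    # every dependency must point at a real step (no invented operations).
    for s in steps:
        if s["task"] not in TASK_REGISTRY:
            warnings.append(f"catalog: unknown task '{s['task']}'")
        for dep in s.get("depends_on", []):
            if dep not in ids:
                warnings.append(f"catalog: '{s['id']}' depends on missing step '{dep}'")
    # Layer 3 (business rules): catch missing inputs and placeholder values.
    for s in steps:
        for key, value in s.get("params", {}).items():
            if value in ("", "TODO", None):
                warnings.append(f"rules: '{s['id']}' has placeholder value for '{key}'")
    return warnings
```

An empty warning list means the Pattern cleared all three layers; anything else would, per the text, trigger a clarifying question rather than a guess.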

04

Confidence-scored Pattern, ready to run

Every generation returns a confidence score from 0.0 to 1.0, penalized for validation warnings, missing inputs, or ambiguous requests. A high-confidence Pattern is ready to execute through Fabric Engine immediately, whether in Studio or in a Delta deployment. A lower score tells you exactly what to review before running.
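The scoring idea can be sketched as a penalty model: start from full confidence and subtract for each issue class, clamping into [0.0, 1.0]. The penalty weights below are invented for illustration and are not Iris's actual values.

```python
# Illustrative penalty weights; the real weighting is an assumption here.
def confidence_score(validation_warnings, missing_inputs, ambiguous_terms):
    score = 1.0
    score -= 0.15 * len(validation_warnings)  # each validation warning
    score -= 0.20 * len(missing_inputs)       # missing inputs cost more
    score -= 0.10 * len(ambiguous_terms)      # ambiguity costs least
    return max(0.0, min(1.0, score))          # clamp into [0.0, 1.0]
```

A clean generation scores 1.0; each flagged issue pulls the score down, which is what makes a lower score actionable: it enumerates exactly what to review.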

The full Fabric task catalog, accessible in plain language.

Vegetation

Crop health & deforestation

NDVI time series, vegetation change detection, fire scar mapping via dNBR, Growing Degree Day accumulation. Iris selects appropriate optical sensors and preprocessing based on your study area and time range.
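For reference, NDVI itself is a standard band ratio. A minimal NumPy sketch, with toy reflectance arrays (on Sentinel-2 the inputs would typically be bands B8 and B4):

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher means denser vegetation.
def ndvi(nir, red):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon guards divide-by-zero

def ndvi_change(nir_t0, red_t0, nir_t1, red_t1):
    """Simple two-date change: negative values suggest vegetation loss."""
    return ndvi(nir_t1, red_t1) - ndvi(nir_t0, red_t0)
```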

Water & Flood

Flood extent & water bodies

NDWI and MNDWI water body mapping, flood extent analysis, coastal change detection. SAR-optical fusion workflows when cloud cover limits optical data availability.

Fire & Hazards

Burn severity & disaster response

Pre/post fire comparison with NBR and dNBR, wildfire risk mapping, landslide susceptibility, multi-source disaster response workflows combining SAR, optical, and elevation data.
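The dNBR workflow reduces to differencing the Normalized Burn Ratio between pre- and post-fire scenes and binning the result. The thresholds below follow the commonly cited USGS/FIREMON severity classes, simplified for illustration; the reflectance values are toys:

```python
# NBR = (NIR - SWIR2) / (NIR + SWIR2); dNBR = pre-fire NBR minus post-fire NBR.
def nbr(nir, swir2):
    return (nir - swir2) / (nir + swir2 + 1e-10)

def burn_severity(dnbr):
    if dnbr < 0.10: return "unburned"
    if dnbr < 0.27: return "low"
    if dnbr < 0.44: return "moderate-low"
    if dnbr < 0.66: return "moderate-high"
    return "high"

# Toy pre/post reflectances: burned areas drop in NIR and rise in SWIR2.
dnbr = nbr(nir=0.45, swir2=0.15) - nbr(nir=0.20, swir2=0.35)
```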

Urban

Growth & infrastructure

NDBI built-up area detection, construction monitoring, impervious surface mapping, and road network change detection. Urban analysis workflows combine optical imagery with OSM vector data.

Terrain

Elevation & slope analysis

SRTM elevation download, terrain preprocessing, slope and aspect derivation. Iris configures the correct resolution, reprojection, and grid alignment for your area of interest automatically.
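Slope and aspect fall out of the elevation gradient. A NumPy sketch assuming a 30 m cell size (SRTM), with rows increasing northward and columns eastward; real rasters often run north-to-south, so the signs would flip:

```python
import numpy as np

# Finite-difference slope and aspect over an elevation grid.
# Assumptions: 30 m cells, axis 0 increases north, axis 1 increases east.
def slope_degrees(elevation, cell_size=30.0):
    dz_dy, dz_dx = np.gradient(elevation.astype(float), cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def aspect_degrees(elevation, cell_size=30.0):
    """Compass direction of steepest descent: 0 = north, clockwise."""
    dz_dy, dz_dx = np.gradient(elevation.astype(float), cell_size)
    return (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360.0) % 360.0
```

A ramp rising 30 m per 30 m cell eastward gives a 45° slope facing due west (aspect 270°), which is the sanity check for the sign conventions above.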

Multi-source

Cross-sensor harmonization

The Patterns where Iris adds the most value: multi-step workflows that combine Sentinel-2, Landsat, SAR, elevation, and vector data with the correct preprocessing chain to make them spatially and radiometrically comparable.

Structured domain knowledge,
not general-purpose AI.

Iris is not a chatbot with a geospatial skin. It is a structured intelligence interface built on a foundation model with comprehensive domain-specific context: the full Fabric task registry, sensor specifications, band presets, validated processing examples, and the rules that govern how geospatial operations compose correctly.

The intelligence comes from structured domain knowledge injection and a constrained output format, not from a fine-tuned model or a retrieval system searching for approximate matches. Every generation is constrained to the validated tasks in the Fabric catalog. Every dependency chain is checked against the task graph. Every output is validated before it reaches you.

This is deliberate. A system that can generate any arbitrary processing step is a system that can hallucinate. Iris operates within the boundary of what Fabric can actually execute, and within that boundary, it is precise.

From Pattern generator
to sensor intelligence.

The foundation layer translates intent into validated pipelines. What follows builds on every Pattern generated and every observation that passes through the system.

The next phase introduces retrieval-augmented generation over a growing library of expert-reviewed Patterns. Instead of generating every pipeline from first principles, Iris will match your request against validated, proven workflows, selecting and adapting Patterns that have been reviewed and scored by domain experts. The accuracy of generated Patterns improves with every reviewed example added to the library.

Beyond retrieval, Iris evolves into a system of record for how sensors observe reality. It learns the dialect each sensor speaks: how Sentinel-2 sees vegetation differently than Landsat, how SAR interprets moisture versus optical reflectance, how atmospheric conditions affect observations under specific conditions. The result is a compounding knowledge graph of transformation decisions: not just what was done, but why it was valid for that sensor combination under those conditions.

The long-term architecture introduces what we call structural humility. When confidence is low (because of cloud cover gaps, sensor degradation, or insufficient corroborating observations), Iris says so. A system that cannot say "I don't know" is not intelligence. Confidence and uncertainty scoring, built into the generation pipeline from the start, becomes the foundation for a system that knows the limits of what its observations can support.

The part of the system
that perceives.

Iris is not an acronym. It is the iris: the part of the eye that adapts to conditions, controlling how much light enters. The part of the visual system that perceives.

Fabric is the connective tissue that harmonizes raw sensor data into composable, analysis-ready formats. Iris is the intelligence that perceives through it, translating human intent into the language Fabric speaks and, eventually, learning from every observation that passes through the system.

In the context of Observational Grammar, M33's foundational framework for how sensors form a language of evidence about reality, Fabric handles syntax and semantics. Iris handles pragmatics: context, confidence, and learning. The layer where independent sensor readings stop being claims and start becoming evidence.