Vertics Documentation
Welcome to the Vertics platform documentation. This guide covers the ratings engine, API, Python client, and Superset dashboards.
What is Vertics?
Vertics is an independent natural capital ratings engine. It combines satellite earth observation with ground-truth field data to produce composite Landscape Quality Index (LQI) scores — transparent, auditable, and market-ready.
The platform serves sellers (landowners, conservation bodies, NNG originators) who need to certify their natural capital assets, and buyers (developers, ESG funds, marketplaces) who need independent due diligence.
Core Components
| Component | URL | Purpose |
|---|---|---|
| Ratings API | api.vertics.app | FastAPI backend — sites, observations, ratings, table uploads |
| Superset | dashboards.vertics.app | Interactive dashboards, SQL Lab, map visualisations |
| Scoring Engine | app/engine/ | Pure Python — normalisation, composite scoring, coherence analysis |
| PostGIS | localhost:5432 | Geospatial data storage — sites, observations, ratings, user tables |
Quickstart
Get from zero to a rated site in five minutes.
1. Install the Python client
pip install httpx pandas geopandas
Then copy vertics_client.py into your project.
2. Upload site data
from vertics_client import VerticsClient
client = VerticsClient(
    base_url="https://api.vertics.app",
    api_key="vk_your_key_here",
)
# Upload field observations as a table
import pandas as pd
df = pd.read_csv("soil_samples.csv")
client.upload_dataframe(df, "soil_samples")
# Upload a site boundary as GeoJSON
import geopandas as gpd
gdf = gpd.read_file("site_boundary.geojson")
client.upload_geodataframe(gdf, "my_site_boundary")
3. Compute a rating
import httpx
response = httpx.post(
    "https://api.vertics.app/v1/ratings/compute",
    params={"api_key": "vk_your_key"},
    json={
        "site_metrics": {"evi": 0.529, "basal_cover": 0.681, ...},
        "reference_baselines": {"evi": 1.0, "basal_cover": 1.0, ...},
        "ecosystem_type": "semi_arid_rangeland",
    },
)
rating = response.json()
print(f"LQI: {rating['lqi']}, Grade: {rating['grade']}")
4. View in Superset
Your uploaded tables are immediately available at dashboards.vertics.app under SQL Lab → select the user_data schema.
Architecture
vertics.app → Landing page (static)
api.vertics.app → FastAPI (:8000)
dashboards.vertics.app → Superset (:8088)
┌──────────────────────────────────────┐
│ Nginx (routing) │
└───────┬──────────────┬───────────────┘
│ │
┌──────▼──────┐ ┌─────▼──────┐
│ FastAPI │ │ Superset │
│ :8000 │ │ :8088 │
└──────┬──────┘ └─────┬──────┘
│ │
┌──────▼──────────────▼───────┐
│ PostGIS (ardscan DB) │
│ public.* user_data.* │
└─────────────────────────────┘
The scoring engine (app/engine/) is a pure Python library with no web dependencies. It can be called from the API, the CLI, a notebook, or a background worker.
Rating Methodology
Every Vertics rating follows the same transparent, repeatable pipeline.
Five Ecological Dimensions
| Dimension | Code | Sub-metrics |
|---|---|---|
| Vegetation & Habitat Structure | vhs | Basal cover, cover stability, vegetation diversity, patchiness, EVI |
| Soil Health & Functionality | shp | Infiltration, organic matter, soil stability |
| Plant Richness, Diversity & Rarity | prdr | Species richness, native species success, taxonomic distinctiveness |
| Mammal Richness, Diversity & Rarity | mrdr | Species richness, rarity weight, Shannon diversity |
| Bird Richness, Diversity & Rarity | brdr | Species richness, rarity weight, Shannon diversity |
Scoring Pipeline
- Normalise: Each sub-metric is divided by its reference baseline value for the ecosystem type. Ratios exceeding 1.0 are capped at 1.0.
- Aggregate: Sub-metrics within each dimension are averaged (unweighted basket).
- Composite: The LQI is the unweighted mean of all 5 dimensional scores.
- Grade: The LQI maps to a letter grade (A+ to D).
- Coherence: Cross-dimensional tensions are analysed.
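The normalise/aggregate/composite steps above can be sketched in plain Python. This is an illustrative sketch, not the engine itself (app/engine/ is the source of truth): the dimension groupings follow the tables in this guide, but the key names for the richness, rarity, and diversity sub-metrics are assumptions.

```python
# Illustrative dimension -> sub-metric groupings (17 sub-metrics total).
# Key names for the richness/rarity/diversity metrics are assumed, not
# the engine's canonical names.
DIMENSIONS = {
    "vhs": ["basal_cover", "cover_stability", "vegetation_diversity",
            "patchiness", "evi"],
    "shp": ["infiltration", "organic_matter", "soil_stability"],
    "prdr": ["plant_richness", "native_success", "taxonomic_distinctiveness"],
    "mrdr": ["mammal_richness", "mammal_rarity", "mammal_shannon"],
    "brdr": ["bird_richness", "bird_rarity", "bird_shannon"],
}

def normalise(value: float, baseline: float) -> float:
    """Divide by the ecosystem-type baseline; ratios above 1.0 are capped."""
    return min(value / baseline, 1.0)

def compute_lqi(site_metrics: dict, baselines: dict) -> tuple:
    """Aggregate sub-metrics into dimensional scores and a composite LQI."""
    dim_scores = {}
    for dim, metrics in DIMENSIONS.items():
        ratios = [normalise(site_metrics[m], baselines[m]) for m in metrics]
        dim_scores[dim] = sum(ratios) / len(ratios)  # unweighted basket
    lqi = sum(dim_scores.values()) / len(dim_scores)  # unweighted mean of 5
    return lqi, dim_scores
```

With baselines of 1.0 and every sub-metric at 0.8, each dimension scores 0.8 and so does the LQI, which is the point of the unweighted design: no single dimension can dominate the composite.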
Coherence Analysis
The coherence analysis is Vertics' key differentiator. It examines whether ecologically related metrics deviate from expected successional patterns — exposing whether a headline LQI score masks structural fragility.
Diagnostic Checks
| Tension | Signal | What It Means |
|---|---|---|
| Megaherbivore Paradox | MRDR ≫ VHS | Faunal abundance unsupported by vegetation structure — carrying capacity risk |
| Hydro-Carbon Decoupling | Organic matter ≫ Infiltration | Soil compaction preventing water reaching the root zone — artificial drought |
| Structural Homogenisation | Native success ≫ Veg diversity | Aggressive indigenous species creating monoculture — habitat simplification |
| Avifaunal Signal | MRDR ≫ BRDR | Birds (structure-dependent) confirm vertical habitat loss |
Flags are classified as critical, warning, or info based on the magnitude of divergence (threshold: 20% gap between related dimensions).
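The two dimension-level checks from the table can be sketched as simple score comparisons. The severity tiering below (double the gap promotes a warning to critical) is an assumption for illustration; the sub-metric checks (Hydro-Carbon Decoupling, Structural Homogenisation) compare individual metrics rather than dimensional scores and are omitted here.

```python
def coherence_flags(dim_scores: dict, gap: float = 0.20) -> list:
    """Flag dimension-level tensions where the leading dimension outruns
    its ecological support by more than `gap` (20% by default)."""
    checks = [
        # (tension name, leading dimension, supporting dimension)
        ("Megaherbivore Paradox", "mrdr", "vhs"),
        ("Avifaunal Signal", "mrdr", "brdr"),
    ]
    flags = []
    for name, lead, support in checks:
        diff = dim_scores[lead] - dim_scores[support]
        if diff > gap:
            # Severity tiering is an assumption for illustration.
            level = "critical" if diff > 2 * gap else "warning"
            flags.append({"tension": name, "gap": round(diff, 3), "level": level})
    return flags
```

Run against the dimensional scores in the sample rating response later in this guide, this flags a Megaherbivore Paradox warning: mrdr (0.948) exceeds vhs (0.66) by well over the 20% threshold.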
Grade Scale
| Grade | LQI Range | Description |
|---|---|---|
| A+ | ≥ 0.95 | Exceptional — self-sustaining, deeply adaptive |
| A | 0.90–0.94 | Adaptive — deep ecological complexity |
| A- | 0.85–0.89 | Adaptive — strong with minor gaps |
| B+ | 0.80–0.84 | Buoyant — functioning, limited intervention needed |
| B | 0.75–0.79 | Buoyant — functioning with moderate gaps |
| B- | 0.70–0.74 | Buoyant — functional but strained |
| C+ | 0.60–0.69 | Compromised — significant intervention needed |
| C | 0.50–0.59 | Compromised — multiple ecological failures |
| C- | 0.40–0.49 | Compromised — severe degradation |
| D | < 0.40 | Degraded — fundamental ecological collapse |
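The grade bands above translate directly into a lookup. A minimal sketch:

```python
# LQI -> letter grade, following the grade scale table above.
# Each entry is (band floor, grade); bands are checked highest first.
GRADE_BANDS = [
    (0.95, "A+"), (0.90, "A"), (0.85, "A-"),
    (0.80, "B+"), (0.75, "B"), (0.70, "B-"),
    (0.60, "C+"), (0.50, "C"), (0.40, "C-"),
]

def grade(lqi: float) -> str:
    """Return the letter grade for an LQI score."""
    for floor, letter in GRADE_BANDS:
        if lqi >= floor:
            return letter
    return "D"  # anything below 0.40
```

For example, the sample rating response later in this guide has an LQI of 0.8047, which falls in the 0.80–0.84 band and grades as B+.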
API — Sites
A site is the asset being rated. It has a polygon boundary stored in PostGIS.
List sites
GET /v1/sites
Create a site
POST /v1/sites
Content-Type: application/json
{
    "name": "Borana Conservancy",
    "ecosystem_type": "semi_arid_rangeland",
    "boundary": { /* GeoJSON Polygon */ }
}
Get site detail
GET /v1/sites/{site_id}
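Creating a site from Python follows the same httpx pattern as the quickstart. The polygon coordinates below are placeholder values, and passing the key as a query parameter mirrors the ratings example; treat this as a sketch rather than the canonical client:

```python
# Example payload for POST /v1/sites. The polygon coordinates are
# placeholders; a real boundary is a closed GeoJSON ring in lon/lat order.
site_payload = {
    "name": "Borana Conservancy",
    "ecosystem_type": "semi_arid_rangeland",
    "boundary": {
        "type": "Polygon",
        "coordinates": [[
            [37.00, 0.20], [37.10, 0.20], [37.10, 0.30],
            [37.00, 0.30], [37.00, 0.20],
        ]],
    },
}

def create_site(api_key: str, payload: dict = site_payload) -> dict:
    """POST the payload to /v1/sites and return the created record."""
    import httpx  # imported lazily so the payload is usable without httpx
    resp = httpx.post(
        "https://api.vertics.app/v1/sites",
        params={"api_key": api_key},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```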
API — Observations
Observations are data points — satellite-derived or field-collected — belonging to a site.
Ingest an observation
POST /v1/observations/sites/{site_id}
Content-Type: application/json
{
    "source": "sentinel2",
    "observed_at": "2026-03-01T00:00:00Z",
    "metrics": {
        "evi": 0.42,
        "ndvi": 0.65,
        "cloud_pct": 8
    }
}
Supported source types: sentinel2, landsat, sentinel1_sar, dem, field_soil, field_species, field_habitat, field_photo, drone, external.
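Posting the observation above from Python looks like this. The client-side source check is a convenience added for the sketch, not part of the API contract:

```python
# Source types supported by the observations endpoint.
VALID_SOURCES = {
    "sentinel2", "landsat", "sentinel1_sar", "dem", "field_soil",
    "field_species", "field_habitat", "field_photo", "drone", "external",
}

observation = {
    "source": "sentinel2",
    "observed_at": "2026-03-01T00:00:00Z",
    "metrics": {"evi": 0.42, "ndvi": 0.65, "cloud_pct": 8},
}

def ingest_observation(site_id: str, api_key: str,
                       payload: dict = observation) -> dict:
    """POST one observation to a site, validating the source type first."""
    if payload["source"] not in VALID_SOURCES:
        raise ValueError(f"unsupported source: {payload['source']}")
    import httpx  # imported lazily so the payload is usable without httpx
    resp = httpx.post(
        f"https://api.vertics.app/v1/observations/sites/{site_id}",
        params={"api_key": api_key},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```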
API — Ratings
Compute a rating (direct engine call)
POST /v1/ratings/compute
Content-Type: application/json
{
    "site_metrics": {
        "basal_cover": 0.681,
        "evi": 0.529,
        // ... all 17 normalised sub-metrics
    },
    "reference_baselines": {
        "basal_cover": 1.0,
        "evi": 1.0
    },
    "ecosystem_type": "semi_arid_rangeland"
}
Response
{
    "lqi": 0.8047,
    "grade": "B+",
    "dimensional_scores": {
        "vhs": 0.66, "shp": 0.686,
        "prdr": 0.905, "mrdr": 0.948, "brdr": 0.824
    },
    "coherence_flags": [ ... ],
    "coherence_narrative": "..."
}
API — Table Uploads
Upload pandas DataFrames and GeoDataFrames directly into PostGIS, making them instantly available in Superset.
Upload CSV
POST /v1/tables/csv?api_key=vk_...
Content-Type: multipart/form-data
file: field_data.csv
table_name: field_observations
schema_name: user_data
if_exists: fail | replace | append
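The multipart form above maps onto httpx as follows. `build_form` is a hypothetical helper that just validates the `if_exists` mode; the field names match the endpoint description, but this is a sketch, not the bundled client:

```python
def build_form(table_name: str, schema_name: str = "user_data",
               if_exists: str = "fail") -> dict:
    """Assemble the multipart form fields for the CSV upload endpoint."""
    if if_exists not in ("fail", "replace", "append"):
        raise ValueError("if_exists must be fail, replace, or append")
    return {"table_name": table_name, "schema_name": schema_name,
            "if_exists": if_exists}

def upload_csv(path: str, table_name: str, api_key: str,
               if_exists: str = "fail") -> dict:
    """Stream a CSV file to POST /v1/tables/csv."""
    import httpx  # imported lazily; build_form is usable without httpx
    with open(path, "rb") as f:
        resp = httpx.post(
            "https://api.vertics.app/v1/tables/csv",
            params={"api_key": api_key},
            files={"file": (path, f, "text/csv")},
            data=build_form(table_name, if_exists=if_exists),
        )
    resp.raise_for_status()
    return resp.json()
```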
Upload GeoJSON
POST /v1/tables/geojson?api_key=vk_...
// Same form fields, plus:
geometry_column: geometry
target_srid: 4326
Upload Parquet / GeoParquet
POST /v1/tables/parquet?api_key=vk_...
List tables
GET /v1/tables/tables?api_key=vk_...&schema_name=user_data
Drop table
DELETE /v1/tables/tables/{name}?api_key=vk_... (admin only)
Uploads are written to the user_data schema by default. This keeps user data isolated from Vertics internal tables in the public schema.
Python Client Library
The VerticsClient wraps the API for use in notebooks, scripts, and pipelines.
Setup
pip install httpx pandas geopandas
Copy vertics_client.py into your project or add it to your PYTHONPATH.
Usage
from vertics_client import VerticsClient
client = VerticsClient(
    base_url="https://api.vertics.app",
    api_key="vk_your_key",
)
# Upload a DataFrame
client.upload_dataframe(df, "soil_samples")
# Upload a GeoDataFrame (becomes PostGIS geometry)
client.upload_geodataframe(gdf, "survey_plots")
# Upload files directly
client.upload_csv("data.csv", "my_table")
client.upload_geojson("boundary.geojson", "boundary")
client.upload_parquet("archive.parquet", "archive")
# Replace existing table
client.upload_dataframe(df, "soil_samples", if_exists="replace")
# Append rows
client.upload_dataframe(new_rows, "soil_samples", if_exists="append")
# List and delete
client.list_tables()
client.drop_table("old_data")
Superset Guide
Apache Superset at dashboards.vertics.app provides interactive dashboards, SQL Lab, and map visualisations.
Accessing uploaded tables
- Go to SQL Lab
- Select the ardscan database
- Select the user_data schema
- Your uploaded tables appear in the table list
Registering a Dataset
- Go to Datasets → + Dataset
- Select the database, the user_data schema, and your table
- Save — it's now available for charts and dashboards
Map visualisations
For tables with PostGIS geometry columns (uploaded via /geojson or GeoParquet):
- Create a new chart from your geo dataset
- Select a deck.gl chart type (Scatter, Polygon, etc.)
- Map the geometry column to the spatial field
Useful views
Two pre-built SQL views are available in the public schema:
| View | Description |
|---|---|
| latest_ratings | Most recent completed rating per site — LQI, grade, area, centroid |
| site_observation_summary | Observation counts, date ranges, satellite vs field split per site |