
The AFI Index Platform
The Engine That Quantifies AI Adoption

The AI Friction Index (AFI) is the first patented enterprise SaaS solution built to quantify the hidden cost of AI adoption. We eliminate organizational friction by providing the Tailored Scaling Constant (K), the auditable financial metric that determines your true cost of scaling AI.

Academic Recognition: At the 2025 Coaching Conference, Columbia University recognized our AFI engine and white paper as a best practice in AI adoption and leadership development.

The Problem: Friction Is Cost
 
The industry understands that human factors limit AI success, but until now no tool could quantify that impact. When Machine Autonomy deviates from Organizational Trust and AI Competencies, it creates organizational friction. AFI's engine translates this friction into a financial penalty because every point of deviation requires costly mitigation: increased training budgets, heavier governance oversight, or slow, manual verification of automated outputs.

The AI Innovation Frontier

In the age of AI, executives will need to lead machines, lead the people who build machines, and lead organizations that adopt AI. Success will depend on how well organizations balance Machine Autonomy with Organizational Trust and AI Competencies.

The Machine Leadership Model represents breakthrough research in the field of AI adoption. When the three elements are aligned (A = T = C), AI adoption for a tool, platform, or system proceeds at its baseline computational cost, expressed as O(n) or through measures such as FLOPs and latency. When these variables fall out of alignment, organizational friction occurs, producing an AI adoption penalty that our SaaS platform quantifies using the AFI Index.
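In equation form, and as a compact restatement only (not the patented function itself), the frontier condition and the resulting penalty can be written as follows, assuming the penalty scales linearly with the Total Weighted Deviation (Dw), which is what the K derivation below implies:

```latex
% Alignment: the three elements coincide, so there is no friction penalty
A = T = C \;\Rightarrow\; D_w = 0 \;\Rightarrow\; \text{Penalty} = 0

% Misalignment: weighted deviation accumulates and is priced by K
D_w > 0 \;\Rightarrow\; \text{Penalty} = K \cdot D_w
```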


CIOs and CHROs intuitively understand this adoption penalty. It arises when additional spending is needed to address adoption gaps such as trust programs, skill building, monitoring, and privacy controls. These hidden costs are not fully captured by traditional AI costing methods, yet they often create significant bottlenecks that keep an AI use case from delivering its full benefit.


Our software helps organizations find the equilibrium between Machine Autonomy, Organizational Trust, and AI Competencies. When A = T = C, AI technologies fall on the blue diagonal line shown in Chart 1. This diagonal is the AI Innovation Frontier, where organizations realize the maximum benefit of an AI tool given its degree of autonomy, the enterprise strategy, and workforce skill sets.


Chart 1: Machine Leadership Model

Our Proprietary, Patented Methodology
 
The AFI engine is built on our patented Machine Leadership Model and Moderating Function, which together fuse three critical, weighted data streams to generate your organization's unique scaling constant (K).
 
1. Data Fusion: The Three Variables
 
The engine's precision comes from linking data that traditionally resides in silos: Machine Autonomy, Organizational Trust, and AI Competencies.
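As an illustration only, the sketch below shows one way three such streams could be fused into per-system deviation scores. The 0-5 scales, the mean-spread deviation measure, and the example systems are placeholder assumptions, not the patented Moderating Function:

```python
from dataclasses import dataclass

@dataclass
class SystemScores:
    """Fused scores for one deployed technology (illustrative 0-5 scales)."""
    name: str
    autonomy: float      # Machine Autonomy, e.g., from CMDB/ITAM records
    trust: float         # Organizational Trust, from proprietary assessments
    competency: float    # AI Competencies, from HRIS data plus LLM analysis
    utilization: float   # Ui: share of the workforce actively using the tool

def weighted_deviation(s: SystemScores) -> float:
    """Placeholder deviation measure: spread of the three elements around
    their mean, weighted by utilization. Zero when A = T = C (the frontier),
    growing as the elements drift apart."""
    mean = (s.autonomy + s.trust + s.competency) / 3
    spread = (abs(s.autonomy - mean) + abs(s.trust - mean)
              + abs(s.competency - mean))
    return spread * s.utilization

# Total Weighted Deviation (Dw) summed across a toy two-system portfolio
portfolio = [
    SystemScores("Azure AD", autonomy=2.0, trust=4.0, competency=4.0,
                 utilization=0.75),
    SystemScores("Copilot", autonomy=4.0, trust=3.0, competency=2.5,
                 utilization=0.6),
]
d_w = sum(weighted_deviation(s) for s in portfolio)
print(f"Dw = {d_w:.2f}")  # 3.00 for this toy portfolio
```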


2. The K Constant Derivation 

Our core patented function calculates your financial cost sensitivity. We derive your unique K constant by dividing the organization's Quantified Penalty (your annual AI readiness budget) by the calculated Total Weighted Deviation (Dw). The resulting K factor is the single most important metric for your AI strategy: it tells you the exact dollar cost your organization incurs for every unit of organizational friction.
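As a hypothetical worked example (all figures invented for illustration): suppose your annual AI readiness budget, the Quantified Penalty, is $2.2M, and the engine computes an enterprise-wide Dw of 10.0 across your full technology portfolio. Then:

```latex
K = \frac{\text{Quantified Penalty}}{D_w}
  = \frac{\$2{,}200{,}000}{10.0}
  = \$220{,}000 \text{ per unit of weighted deviation}
```

Under these figures, every unit of organizational friction the engine detects costs the organization roughly $220K per year, and every unit removed through remediation returns that amount.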
Actionable Output: The System Penalty Report

The AFI platform delivers its analysis through real-time dashboards and reports, focusing on the ultimate metric: the AI system penalty contribution of each deployed and planned technology, pinpointing exactly where the adoption pain points lie.


Unrivaled Security & Integration

The AFI platform is built as a secure, multi-tenant SaaS application designed for seamless enterprise adoption.

 

  • Secure Integration: OAuth2/SCIM enable secure, read-only integration with major HRIS (Workday, SuccessFactors) and ITAM (ServiceNow, Splunk) systems.

  • Data Integrity: All proprietary data (e.g., Leadership Capacity Scores) are stored securely and used only to feed the K calculation engine and the Benchmarking Database.

  • Real-Time Diagnostics: Webhook-based synchronization ensures your System Penalty Report is updated immediately after any change in license counts, training completion, or leadership assessment results (see the sketch below).
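
For illustration, a minimal consumer of such a webhook event might look like the sketch below. The /afi/sync endpoint, the payload fields, and the recalculate_penalty helper are hypothetical placeholders, not the platform's documented API:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/afi/sync", methods=["POST"])
def afi_sync():
    """Hypothetical webhook receiver: HRIS/ITAM systems POST change events
    here, and each relevant event triggers a System Penalty Report refresh."""
    event = request.get_json(force=True)
    # event_type and system_id are illustrative placeholder field names.
    if event.get("event_type") in {"license_change", "training_completed",
                                   "assessment_updated"}:
        recalculate_penalty(event.get("system_id"))
    return jsonify(status="accepted"), 202

def recalculate_penalty(system_id: str) -> None:
    """Placeholder for the K-engine recalculation call."""
    print(f"Refreshing System Penalty Report for {system_id}")

if __name__ == "__main__":
    app.run(port=8080)
```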

Design Elements

Machine Autonomy
  Data Source: CMDB, ITAM, others
  Score Contribution: Measures the current automation level of each tool, weighted by Utilization (Ui).

Organizational Trust
  Data Source: Proprietary
  Score Contribution: Measures the enterprise-wide willingness to rely on autonomous systems.

Composite Competencies
  Data Source: HRIS data + LLM
  Score Contribution: Measures the human skill and leadership capacity to interact with and lead AI.

AI Leadership Assessments
  Data Source: Proprietary
  Score Contribution: Measures the cognitive processes and leadership traits important to AI.

Outputs

System Penalty
  Purpose: The precise dollar cost of friction attributed to a single technology stack, alongside the enterprise average.
  Prioritization: Use the Pi value to justify automation or training budgets for the highest-cost systems first to drive ROI.

Real-Time Imbalance Trend
  Purpose: Displays the historical trend of Dw against the current K factor, enabling efficient trend analysis.
  Governance: Track whether remediation efforts (training, automation) are successfully reducing the organizational friction score.

LLM Strategic Recommendations
  Purpose: Generative LLM insights that translate the mathematical findings into specific action plans.
  Example: "The $440K Azure AD penalty requires increasing the autonomy score from 2.0 to 4.0 by delegating more tasks to the machine."

Global Benchmarking
  Purpose: Provides a guidepost for CIOs and CHROs to track their AI adoption trends using multiple metrics.
  Best Practices: Utilize our statistical packages to identify best-practice outliers that can be readily adopted within your organization's culture.
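
Tying these outputs together: if, as the K derivation implies, each system's penalty contribution is Pi = K · Di (its weighted deviation priced by the scaling constant), then ranking systems by Pi yields the prioritization view. The sketch below reuses the hypothetical figures from earlier; the third system and all deviation values are invented for illustration:

```python
K = 220_000  # hypothetical scaling constant: $ per unit of weighted deviation

# Hypothetical per-system weighted deviations (Di) from the fusion step
deviations = {"Azure AD": 2.0, "Copilot": 1.0, "ServiceNow bot": 0.4}

# Pi = K * Di: per-system penalty contribution, ranked for budget priority
penalties = sorted(((name, K * d) for name, d in deviations.items()),
                   key=lambda item: item[1], reverse=True)
for name, p in penalties:
    print(f"{name}: ${p:,.0f}")
# Azure AD tops the list at $440,000, consistent with the LLM
# recommendation example above.
```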
