
Publications


2026: The Liquidity Event Horizon and the Great AI Interplay
by Didier Vila, PhD – Founder and MD of Alpha Matica. The AI economy is no longer a clean stack; it is a live, colliding system. The "five-layer cake" is built, but the layers are still shifting under enormous pressure. As we enter the second half of 2026, we stand at the Liquidity Event Horizon: the moment when massive capital events at SpaceX, OpenAI, and Anthropic will finally price the uncertainty that has defined the last 18 months. At Alpha Matica, we don't pretend


The Strait of Anti-Fragile: A Supply Chain Management Perspective
by Didier Vila, PhD – Founder and MD of Alpha Matica. Introduction The 2026 Iran War and the blockade of the Strait of Hormuz have delivered a game-changing moment for global trade. Coming hard on the heels of the 2025 tariff shocks¹ (Supply Chain Resilience Amid Tariffs and Uncertainty), this conflict could create one of the most severe energy and logistics disruptions in modern history. With nearly 20% of global oil and significant volumes of LNG, petrochemicals, and fertili


Pillar 4: Interpretability & Monitoring
by Didier Vila, PhD – Founder and MD of Alpha Matica. This document serves as a detailed examination of the critical fourth layer in our "Architecture of Trust" framework, a unified defence-in-depth stack developed by the frontier AI community to ensure system safety. By 2026, the primary threat vector in frontier AI has shifted from external prompt injection to Internal Deceptive Alignment. Pillar 4 establishes a multi-layered "White-Box" oversight architecture. This framew


Pillar 3: Alignment and Control – Steering Frontier AI Toward Human Intent in an Accelerating World
by Didier Vila, PhD – Founder and MD of Alpha Matica. This document serves as a detailed examination of the critical third layer in our "Architecture of Trust" framework, a unified defence-in-depth stack developed by the frontier AI community to ensure system safety. While Pillar 1 focuses on external governance and Pillar 2 on internal robustness, Pillar 3 represents the "steering wheel" of AI behaviour. Its primary function is to align the model's internal goals with com