From Control Systems and Electrical Engineering to AI in Finance
Audio Brief
Episode Overview
- Explores the shift in financial technology from bundled, closed ecosystems (like Bloomberg) to open, unbundled infrastructures where users control the interface and data integration.
- Examines the intersection of control systems engineering and finance, discussing how principles like feedback loops and robustness apply to market analysis and software design.
- Discusses the evolving role of the financial analyst, detailing how AI agents are lowering technical barriers and allowing domain experts to build bespoke tools without coding expertise.
- Covers broader engineering philosophies, including the value of synthetic data in AI training, the differences between hardware and software development cycles, and the "vibe check" metric for choosing AI models.
Key Concepts
- Decoupling Interface from Data: OpenBB represents a paradigm shift where the "workspace" (presentation layer) is separated from the data source. Unlike bundled terminals, where you buy software and data together, this unbundled approach allows firms to use their own data (Snowflake, internal APIs) within a secure dashboard, ensuring data sovereignty and preventing vendor lock-in.
- The "Local Optima" Problem: Legacy financial software vendors optimize for the widest possible user base, resulting in products that are "good enough" for many but excellent for none. They cannot offer extreme customization without alienating the average user. A modular infrastructure solves this by letting the end-user, not the vendor, control the workflow and layout.
- AI-Driven Role Convergence: The historical wall between financial analysts (who need tools) and developers (who build tools) is collapsing. With AI agents capable of writing code, analysts can now describe the dashboard or integration they need, and the system builds it. This turns domain experts into builders, removing the friction of waiting for engineering teams.
- Control Theory in Finance: Drawing from electrical engineering, the podcast contrasts the finance industry's obsession with complex predictive modeling against "robust control." Effective systems act like PID controllers—focusing on reacting efficiently to real-time feedback loops rather than attempting to perfectly predict an uncertain future with black-box models.
- Simplicity vs. Academic Complexity: There is a disconnect where academia teaches complex, state-of-the-art models (like nonlinear model predictive control), but industry relies on simpler, older technology (like PID controllers or simple moving averages). Simple models are preferred because they are robust, explainable, and their failure states are easier to diagnose than complex "black box" systems.
- Synthetic Data for Edge Cases: In safety-critical AI (like autonomous driving), real-world data has diminishing returns because dangerous "black swan" events happen rarely. Synthetic data generated in simulation is superior here, allowing engineers to artificially create thousands of variations of rare, dangerous scenarios to ensure the AI learns the correct safety policy.
Quotes
- At 1:36 - "We built an infrastructure that allows you to mimic that software, but that you host on your own premises... and allows you to connect with your data... and we don't have access to anything." - Explaining the core privacy and security value proposition: the vendor provides the container, not the content.
- At 4:33 - "They [legacy vendors] are stuck in this minima where the product is good enough for a lot of people... but is not great for anyone because they are not allowing more access." - Highlighting the limitation of closed-source, bundled financial terminals.
- At 6:52 - "Because we don't own the data, then you can also control at the data level... the data is not going to render to them on the workspace and that's a feature, not a bug." - Explaining how security permissions are inherited from the data source rather than managed loosely in the app.
- At 10:06 - "Analysts and PMs are just going to be empowered to have more control over what they build... we are kind of splitting them in two, but in a way, they are kind of converging." - On how AI agents are allowing non-technical staff to perform engineering tasks.
- At 15:39 - "Even if I could [afford a Bloomberg terminal], I didn't have the control that I wanted... As an engineer... I wanted to possess that data and create an estimate." - The origin story of OpenBB: frustration with the lack of programmatic control in professional tools.
- At 26:35 - "Then you go into the industry, and like everyone use the same PID controller... simple, three terms... And it's like, it works incredibly well for everything you can think of... but then in the industry everyone uses PID." - Illustrating how simple, robust legacy systems often outperform complex academic theories in real-world application.
- At 37:13 - "When you are working with hardware... then you appreciate software. Because if you want something different, it's not like you can just build it and compile... you will have to have these massive cycles." - Highlighting the difficulty of hardware engineering compared to the rapid iteration cycles possible in software.
- At 39:27 - "How many times... will you see like a child running in front of the car?... In a simulation world, you can recreate that as many times as you want... I think that actually synthetic data for that use case is the best." - Arguing that for safety-critical AI, simulated data is valuable because it allows for infinite testing of dangerous edge cases.
- At 51:17 - "I'm not loyal to any company. I'll use whatever the best coding model [is]... I think that it's hard to have a moat in this space because we're not really loyal to anything, we're loyal to the best intelligence." - Explaining why foundation model companies struggle to build moats; developers will instantly switch to whichever model provides better results.
Takeaways
- Prioritize Data Agnosticism: When building or selecting financial tools, look for platforms that do not couple the interface with the data. This allows you to mix proprietary internal data with public feeds without compliance or technical conflicts.
- Use AI to "Reverse Engineer" Outputs: Instead of prompting AI from scratch, feed it a finished, high-quality report or dashboard and ask it to write the code to recreate that style. Use AI to build your process, not just your final content.
- Choose Robustness Over Complexity: In both engineering and trading strategies, favor simple, explainable models (like moving averages or PID controllers) over complex black boxes. Being able to explain why a model failed is critical for risk management.
- Leverage Simulation for Policy Training: If you are building AI for safety-critical environments where real-world errors are unacceptable, shift focus from collecting more real data to generating synthetic data that specifically targets edge cases.
- Adopt the "Vibe Check" for Tool Selection: Don't rely solely on benchmarks when selecting AI models for development. Trust the tactile experience of coding with the model; if the workflow feels faster and the logic sounder (as seen with the shift between Gemini and Claude), that efficiency often outweighs raw benchmark scores.
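As a sketch of the "robustness over complexity" takeaway, a moving-average crossover signal fits in a dozen lines and every decision it makes can be explained by pointing at two numbers. The window lengths and prices below are hypothetical.

```python
# Minimal sketch of a simple, explainable model: a moving-average
# crossover signal. Windows and prices are illustrative only.

def sma(values, window):
    """Trailing simple moving average; None until enough data has arrived."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
fast, slow = sma(prices, 3), sma(prices, 5)

# Fully explainable rule: long whenever the fast average sits above the slow.
signals = [
    "long" if f is not None and s is not None and f > s else "flat"
    for f, s in zip(fast, slow)
]
print(signals)
```

When this rule fails, the diagnosis is immediate (the averages crossed late, or the windows were wrong for the regime), which is exactly the risk-management property the takeaway argues a black-box model cannot offer.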