I haven't written anything in Microsoft Word in over a year. I write in .md files. My spreadsheets are CSV files, data is in a local DuckDB, and everything is tracked in git. My entire sales engineering workflow runs through the same toolchain I use to write code. Customer emails, technical requirements, pipeline tracking, all of it. And I can talk to it in plain English.
This isn't about being a developer playing knowledge worker. I'm a knowledge worker who discovered that software engineering practices, combined with the breakthrough of large language models and MCP, solve white-collar problems better than traditional white-collar tools ever could.
I didn't start my career writing code. I spent years in tech strategy and knowledge management. The work was about coordinating stakeholders, synthesizing complex information, making strategic recommendations. I lived in PowerPoint for executive briefings, Excel for tracking initiatives, Word for white papers, and SharePoint for that digital filing cabinet where documents go to die.
I was good at this work but was constantly fighting my tools.
Every presentation started from scratch because last quarter's deck got lost in someone's email. Every spreadsheet had five versions scattered across shared drives (FINAL, FINAL_v2, FINAL_ACTUALLY_FINAL). Strategic documents just sat there as frozen artifacts. No way to see how we got there, why we decided anything.
The real pain was context switching and consistency. I'd spend my morning in Salesforce tracking progress, then export to Excel for analysis, copy insights into Word for documentation, paste tables into PowerPoint for presentations, upload everything to SharePoint for "version control," and then email the whole mess to stakeholders with links going every which way.
Each tool was an island that required its own mental map to get around. The integration & context layer was me, manually copying and pasting, trying to keep it all synchronized in my head.
Then I learned how to code.
The more code I wrote, the more I questioned everything about knowledge work. Why don't we version control strategic decisions? Why is copy-paste our integration layer? How much context am I losing jumping between platforms all day, all week, all month?
Then large language models arrived and everything changed.
Natural language became a real interface to information. AI could read schemas, call APIs, synthesize data across sources. Most importantly, it could maintain a consistent stream of context.
Here's the traditional knowledge work stack. If you're reading this, you're probably living it.
These three layers should work together. Instead, they actively fight each other:
The Document Layer: Word, Excel, PowerPoint. Isolated artifacts that hold your thinking, each one starting from scratch.
The Enterprise Data Layer: Salesforce, SharePoint, the systems of record. Authoritative, but locked behind their own UIs.
The Integration Layer: you. Manual copy-paste, exports, and email threads holding it all together.
Your morning starts in the Document Layer (reviewing that PowerPoint), moves to the Enterprise Data Layer (pulling Salesforce reports), and ends in the Integration Layer, which is you, manually stitching it all together while the tools watch.
This isn't anyone's fault. These tools were built when documents were the atomic unit of knowledge work. They've been incrementally improved, but the fundamental model stayed the same. Create isolated artifacts, store them in silos, manually integrate.
To further complicate things, countless other tools have promised to solve the context problem, but most just add another layer of complexity. They don't solve the context problem; they add to it.
These tools don't compose. You can't pipe data from one to another without technical folks to build custom integrations. If you don't have a data team, you're doing all of this manually.
My current stack for sales engineering work looks completely different. Managing customer engagements, tracking technical requirements, writing proposals, coordinating deals.
The breakthrough is that AI paired with FastMCP can now serve as the universal interface layer between natural language and structured systems. This is what makes everything else work.
```
┌─────────────────────────────────────────────────────────────┐
│                  THE KNOWLEDGE WORK STACK                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                   INTERFACE LAYER                   │    │
│  │ • SuperWhisper: Voice-to-text for natural input     │    │
│  │ • Cursor + Claude Code: AI-native code editor       │    │
│  │ • Natural language as universal interface           │    │
│  │ • Single pane of glass for all work                 │    │
│  └─────────────────────────────────────────────────────┘    │
│                              ↕                              │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                    CONTEXT LAYER                    │    │
│  │ • FastMCP: Custom tools for domain-specific work    │    │
│  │ • MCP Resources: Schema & documentation as context  │    │
│  │ • Tools compose naturally, call each other          │    │
│  └─────────────────────────────────────────────────────┘    │
│                              ↕                              │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                  INTEGRATION LAYER                  │    │
│  │ • Superhuman: Email as queryable data source        │    │
│  │ • MCP connectors: Salesforce, Notion, etc.          │    │
│  │ • APIs: Direct integration with enterprise systems  │    │
│  └─────────────────────────────────────────────────────┘    │
│                              ↕                              │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                     DATA LAYER                      │    │
│  │ • DuckDB: Local analytical database                 │    │
│  │ • CSV files: Version-controlled structured data     │    │
│  │ • Markdown: All prose and documentation             │    │
│  │ • Git: Version control for everything               │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
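The data layer at the bottom of that stack is deliberately boring: plain CSV and Markdown files that diff cleanly in git. Here's a minimal sketch of why that works — the engagements schema and numbers are illustrative, not my actual pipeline:

```python
import csv
import io

# Illustrative stand-in for a version-controlled engagements.csv;
# the columns and values are hypothetical.
csv_text = """company,stage,amount
Acme,Demo,50000
Globex,Ongoing POC,120000
Initech,Demo,30000
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Aggregate pipeline value by stage -- the kind of question DuckDB
# answers with SQL directly over the same file on disk.
totals: dict[str, float] = {}
for row in rows:
    totals[row["stage"]] = totals.get(row["stage"], 0.0) + float(row["amount"])

print(totals)
```

Because the file is plain text, every change to the pipeline shows up as a readable git diff, and DuckDB can run SQL over the same file directly with something like `SELECT stage, SUM(amount) FROM 'engagements.csv' GROUP BY stage`.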
Model Context Protocol (MCP) is Anthropic's standard for connecting LLMs to tools and data sources. FastMCP is a Python framework that makes building these connections dead simple. Think "Flask for AI tools."
Without MCP, an LLM only works with what's in its context window. You manually paste data, explain systems, provide context every time. It's like hiring an analyst but making them start from scratch every conversation.
With MCP, you build tools once that expose your data and workflows to the LLM. The LLM can query databases, read from Salesforce or Snowflake, update structured data, generate reports, sync across systems. All through natural language. You encode your domain knowledge once, in tools, then interact with it naturally.
So what does building in this new stack look like? Here's an example.
Here's what an MCP tool looks like:
```python
from fastmcp import FastMCP

mcp = FastMCP("sales-engineering")

@mcp.tool()
def add_engagement_tool(
    company_name: str,
    account_executive: str,
    sales_engineer: str,
    stage: str,  # Demo, Pre POC, Ongoing POC, Post POC
    timeline: str | None = None,
    notes: str | None = None,
):
    """
    Add a new engagement (customer + opportunity + activity).

    Creates the customer if it doesn't exist, adds the opportunity,
    and creates the initial activity.
    Stage must be: "Demo", "Pre POC", "Ongoing POC", or "Post POC".
    """
    # add_engagement is the underlying business-logic function,
    # defined elsewhere in the project.
    return add_engagement(
        company_name=company_name,
        account_executive=account_executive,
        sales_engineer=sales_engineer,
        stage=stage,
        timeline=timeline,
        notes=notes,
    )
```
One function, decorated with @mcp.tool(), and my Claude Code instance can now create customer records.
I speak via SuperWhisper: "Take a look at the most recent transcript for company X, create an engagement, assign it to Darren and myself, and update Notion." Tool fires, database updates.
Tools compose. I have tools that query DuckDB, pull from Salesforce, generate reports, sync to Notion. Each one simple. Together, they orchestrate my entire workflow. Claude Code chains them automatically based on what I ask.
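A sketch of what that composition looks like — the tool names, schema, and data here are all hypothetical, and each function would be registered with @mcp.tool() exactly like the example above:

```python
# Hypothetical in-memory data; in practice this would be a DuckDB query.
ENGAGEMENTS = [
    {"company": "Acme", "stage": "Demo", "amount": 50000},
    {"company": "Globex", "stage": "Ongoing POC", "amount": 120000},
]

def query_engagements(stage: str) -> list[dict]:
    """Small, single-purpose tool: filter engagements by stage."""
    return [e for e in ENGAGEMENTS if e["stage"] == stage]

def stage_report(stage: str) -> str:
    """Higher-level tool that composes the query tool into a report."""
    rows = query_engagements(stage)
    total = sum(e["amount"] for e in rows)
    lines = [f"- {e['company']}: ${e['amount']:,}" for e in rows]
    return f"## {stage} ({len(rows)} deal(s), ${total:,})\n" + "\n".join(lines)

print(stage_report("Demo"))
```

Each tool stays trivially small; the LLM (or a higher-level tool) decides how to chain them.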
You're encoding business rules into the LLM's behavior. How should a report be structured? What data should someone have access to? In what order should processes execute? What validation rules apply? You code all of this into your MCP tools and the tools become the instantiation of your business logic. The LLMs will execute processes using your internal rules, governance, and workflows.
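For instance, the stage constraint from the tool's docstring can be enforced in code rather than trusted to the model. These particular rules are illustrative; the real ones would be whatever your org's governance requires:

```python
# Illustrative business rules, hard-coded so every LLM call obeys them.
VALID_STAGES = {"Demo", "Pre POC", "Ongoing POC", "Post POC"}

def validate_engagement(stage: str, account_executive: str) -> None:
    """Raise early so the LLM gets a clear, correctable error message."""
    if stage not in VALID_STAGES:
        raise ValueError(
            f"Invalid stage {stage!r}; must be one of {sorted(VALID_STAGES)}"
        )
    if not account_executive.strip():
        raise ValueError("Every engagement needs an account executive")
```

When the model passes a bad stage, it gets a precise error back and corrects itself; the rule never depends on the model remembering the docstring.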
This is engineering practices for knowledge work: small, composable functions that solve specific problems, encode business rules, and combine into governed workflows.
The future of data engineering includes building context for LLMs.
LLMs will become how most people interface with enterprise systems. Business users, general employees, non-technical folks. Natural language will replace specialized UIs, complex workflows, and domain-specific training. But LLMs can only be as useful as the context you give them.
Who builds that context layer? Data engineers.
Think about what happens when someone asks an LLM: "Show me Q4 sales performance for the Northeast region."
Which data warehouse to query and how to connect. What "Q4" means for your fiscal calendar. How "Northeast region" maps to your territory definitions. What access controls apply to this user. How to format results for their role. Where to log this request for audit.
That's not AI doing the work automatically; that's the infrastructure and context layer doing it. Building that layer is data engineering work.
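None of those steps happens by magic; each is a small, unglamorous mapping someone has to encode. A sketch, with an entirely made-up fiscal calendar and territory definition:

```python
from datetime import date

# Everything below is an assumption about a hypothetical company --
# the point is that these mappings must live somewhere in code.
FISCAL_YEAR_START_MONTH = 2  # say the fiscal year starts in February

TERRITORIES = {
    "Northeast": ["NY", "NJ", "CT", "MA"],  # hypothetical territory codes
}

def fiscal_quarter(d: date) -> str:
    """Translate a calendar date into this company's fiscal quarter."""
    shifted = (d.month - FISCAL_YEAR_START_MONTH) % 12
    return f"Q{shifted // 3 + 1}"

def resolve_region(name: str) -> list[str]:
    """Map a colloquial region name onto concrete territory codes."""
    if name not in TERRITORIES:
        raise ValueError(f"Unknown region: {name}")
    return TERRITORIES[name]

print(fiscal_quarter(date(2025, 1, 15)), resolve_region("Northeast"))
```

Exposed as MCP tools, functions like these are what let "Q4 sales for the Northeast" resolve to the right query instead of a guess.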
The organizations that win with AI won't have the best models. They'll have the best context layers. The richest connections between LLMs and enterprise reality. The most reliable orchestration of complex processes behind simple questions.
Data engineers who master both data orchestration (Prefect Cloud) and context orchestration (FastMCP) will be the ones who enable their organizations to use AI effectively.
At the end of the day, it's all context.
If you're a data engineer who masters both, you will be essential.
Next time someone asks why you're writing markdown instead of Word, or tracking data in CSV instead of Excel, or building MCP tools instead of using enterprise platforms, send them this. Then commit your work and get back to building.