The limit to AI’s value is not technology; it’s privacy. AI is limited by the data available to train it, and most of the world’s data is private. After training, the computing and environmental costs of AI use (inference) depend largely on how much potentially private data must be presented as context for a particular request.
For example, my electronic health record as text is about 10 MB, my 23andMe genome is 25 MB, and one month of continuous glucose monitor (CGM) readings is 1.5 MB for that single wearable parameter alone. On top of that, add at least 100 MB of diagnostic imaging for a single patient. Clearly, presenting 10–100 MB of context each time we make a query to a physician or to a large language model (LLM) would be cost-prohibitive unless that physician or LLM were fine-tuned to “remember” that person’s private data. In healthcare, long-term access to a primary care physician amounts to fine-tuning on a patient’s context. Such longitudinal access to expert humans is increasingly hard to find unless you can afford a concierge practice. From the physician’s perspective, access to an LLM they can fine-tune as they practice would differentiate them and enhance their reputation the way education and experience always have.
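The cost-prohibitive claim can be checked with a back-of-envelope estimate. The sketch below assumes roughly 4 bytes of English text per token and a hypothetical inference price of $3 per million input tokens; both numbers are illustrative assumptions, not figures from this article or any particular vendor.

```python
# Back-of-envelope: cost of re-sending personal context with every query.
# Assumptions (illustrative only): ~4 bytes of text per token, and a
# hypothetical price of $3 per million input tokens.
BYTES_PER_TOKEN = 4
PRICE_PER_MILLION_TOKENS = 3.00  # USD, hypothetical

def context_cost(megabytes: float) -> float:
    """Estimated input-token cost of one query carrying `megabytes` of context."""
    tokens = megabytes * 1_000_000 / BYTES_PER_TOKEN
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Low and high ends of the 10-100 MB range discussed above.
for mb in (10, 100):
    tokens_m = mb * 1_000_000 / BYTES_PER_TOKEN / 1_000_000
    print(f"{mb:4d} MB of context = {tokens_m:.1f}M tokens = ${context_cost(mb):.2f} per query")
```

Under these assumptions a single query carrying 100 MB of context costs tens of dollars in input tokens alone, which is exactly why fine-tuning (paying once to "remember") beats re-sending the context every time.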
Fine-tuning a personal LLM avoids the privacy nightmare of training a god-like frontier LLM on everyone’s private data. The personal LLM eventually becomes a digital twin of the human, able to adjust how much private personal data is shared with frontier LLMs for either training or inference.
The introduction of a personal LLM as a digital twin is therefore essential for both the privacy and the cost-effectiveness of AI in medicine, education, law, politics, warfare, the arts, and any other endeavor where humans are to be treated as individuals with individual reputation and accountability.
Simply put, frontier LLMs and personal AIs need each other.
For now, the focus is still on frontier LLMs. Some, like DeepSeek and Llama, are open source and can therefore be shrunk and fine-tuned as personal AIs. Recent innovations like Anthropic’s Model Context Protocol (MCP) and Google’s Agent-to-Agent (A2A) protocol move in the direction of optimizing frontier or personal LLMs for different use cases while greatly reducing the cost of such customization. As MCP and A2A are combined with open-source LLMs, privacy will improve and digital twins will emerge as privacy-preserving agents of an individual human.
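To make the MCP mention concrete: MCP frames tool calls as JSON-RPC 2.0 messages, which is part of why a digital twin could mediate access to private data source by source. Below is a minimal sketch of what such a request looks like on the wire; the tool name `fetch_cgm_readings` and its arguments are hypothetical examples, not part of the MCP specification (real tool names come from whatever the server advertises).

```python
import json

# A minimal MCP-style request, following the protocol's JSON-RPC 2.0 framing.
# The tool name and arguments are hypothetical: imagine a personal digital
# twin exposing a patient's CGM history as a tool a frontier LLM may call,
# with the twin deciding how much private data each call actually returns.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_cgm_readings",              # hypothetical tool
        "arguments": {"patient_id": "self", "days": 30},
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

The privacy leverage is in the `arguments`: the twin can narrow `days` or refuse the call entirely, so the frontier model sees only the slice of private data each request justifies.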