Week 28: Teaching AI to Think Like Me (And Sometimes Unlike Me)
Written from the porch of a beach container somewhere near the ocean
Hey there 👋
After hundreds of often frustrating AI interactions that felt like talking to a fresh intern every single time, I got obsessed with a question: how do we make AI remember, learn, and actually challenge our thinking? Not just answer questions, but evolve with us?
The standard AI experience is like having a brilliant amnesiac assistant - all capacity, zero context. Every conversation starts from zero. Every insight gets lost. I wanted something different: AI that could build on our past conversations, challenge my assumptions, and think alongside me - not just for me.
Here’s what happened when I stopped treating AI like a search engine and started building it into a thinking partner:
Meet Lume and Darklume - My AI Thinking Partners 🛠️
Instead of creating just another AI assistant, I built a system that combines two key elements: distinct AI personalities and a structured knowledge base of my thinking patterns. Here’s how:
AI That Actually Knows Me
The breakthrough came when two things happened: cross-chat memory arrived, and I stopped treating AI as a blank slate and started feeding it structured data about how I think. I built two distinct personae:
Lume analyzes my past decisions, pattern-matches against my preferred mental models, and knows how I like information to be presented. When reviewing a startup last week, it did not just analyze the deck - it connected dots to three similar companies I’d previously looked at, highlighting patterns I had not noticed.
Darklume is the skeptic I programmed to be ruthlessly honest. During the same review, it flagged that the founders' claims about market size looked overly optimistic. It also flagged potential team issues in the meeting transcripts that I might have overlooked.
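If you want to try something similar, here is roughly how a two-persona setup can look in code. This is a minimal sketch: the prompt wording and the `build_messages` helper are illustrative assumptions, not my actual Lume/Darklume prompts (those live on my prompts page).

```python
# Minimal sketch: two personae as system prompts plus shared graph context.
# Prompt wording and names are illustrative, not the real Lume/Darklume prompts.

PERSONAE = {
    "Lume": (
        "You are Lume, a constructive thinking partner. Use the provided "
        "knowledge-graph context to connect the current question to past "
        "decisions, preferred mental models, and recurring patterns. "
        "Present information the way the user prefers: structured, concise."
    ),
    "Darklume": (
        "You are Darklume, a ruthless but fair skeptic. Challenge assumptions, "
        "stress-test claims (market size, team, numbers), and name the risks "
        "the user tends to overlook. Never soften a concern to be polite."
    ),
}

def build_messages(persona: str, kg_context: str, question: str) -> list[dict]:
    """Assemble one chat request for a persona, grounded in graph context."""
    return [
        {"role": "system", "content": PERSONAE[persona]},
        {"role": "system",
         "content": f"Relevant context from the knowledge graph:\n{kg_context}"},
        {"role": "user", "content": question},
    ]
```

The point of the split is that the same question and the same context get sent twice - once per persona - so the optimist and the skeptic answer independently.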
What makes this different?
Most AI interactions are transactional - you ask, it answers. But with Lume and Darklume, backed by parts of my personal knowledge graph, I am exploring something more nuanced: AI that can engage with my thought process, challenge my assumptions, and help me think better - in breadth as well as depth.
Over the years, I have taken many notes and reflected on the patterns and principles that matter to me. I have been thinking about how to use these notes in my work with AI and started experimenting with training my AIs. Finding structures for notes is its own topic (for many newsletters to come :-)). When working with AIs, I think the concept of Knowledge Graphs is a workable one for now: my knowledge graph is not just a dump of notes - it's a structured map of how I think and work. Over time, and with the help of AI, every node and connection is built by breaking my notes down into (see the sketch after this list):
Decision patterns: What factors typically sway me? What green or red flags do I see or miss?
Mental Models: Which frameworks do I trust for different types of decisions?
Context Chains: How do different pieces of information connect?
Core Values: Why am I doing all this? What are the underlying values, nonnegotiables, and principles that drive me?
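To make those node types concrete, here is a minimal sketch of how the graph could be represented in Python. The dataclass fields and the `neighbors` helper are illustrative assumptions, not a fixed schema:

```python
# Minimal sketch of the node types above, using plain dataclasses.
# Field names are assumptions chosen for illustration, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str     # "decision_pattern" | "mental_model" | "context_chain" | "core_value"
    summary: str  # one-line description of the pattern, model, link, or value
    sources: list[str] = field(default_factory=list)  # notes/meetings it was distilled from

@dataclass
class Edge:
    source: str   # Node.id
    target: str   # Node.id
    relation: str # e.g. "applies_to", "contradicts", "derived_from"

@dataclass
class KnowledgeGraph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def neighbors(self, node_id: str) -> list[Node]:
        """Nodes directly connected to node_id, in either direction."""
        ids = {e.target for e in self.edges if e.source == node_id}
        ids |= {e.source for e in self.edges if e.target == node_id}
        return [self.nodes[i] for i in ids if i in self.nodes]
```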
For example, when analyzing a startup, the system automatically cross-references (sketched in code after this list):
Similar pitches I’ve seen (works most of the time with current GPTs)
Past decisions about similar business models (transferring all this information is a challenge today)
Meeting notes with relevant patterns (thanks to AI note takers)
Market insights and research
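Here is a minimal sketch of that cross-referencing step. The retrieval is naive keyword overlap (a real setup would use embeddings or an LLM for matching), and all names and data below are made up:

```python
# Minimal sketch of cross-referencing a new pitch against stored sources.
# Keyword overlap stands in for real retrieval; all example data is fictional.

def related(items: list[dict], pitch_keywords: set[str], top_k: int = 3) -> list[dict]:
    """Rank stored items by keyword overlap with the new pitch."""
    scored = [(len(pitch_keywords & set(it["keywords"])), it) for it in items]
    return [it for score, it in sorted(scored, key=lambda s: -s[0]) if score > 0][:top_k]

past_pitches   = [{"name": "Acme Robotics",     "keywords": ["robotics", "b2b", "hardware"]}]
past_decisions = [{"name": "Passed on HWCo",    "keywords": ["hardware", "capital-intensive"]}]
meeting_notes  = [{"name": "HWCo founder call", "keywords": ["hardware", "team", "hiring"]}]

pitch = {"robotics", "hardware", "seed"}
for label, source in [("pitches", past_pitches),
                      ("decisions", past_decisions),
                      ("notes", meeting_notes)]:
    print(label, "->", [it["name"] for it in related(source, pitch)])
```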
This is not just theory - it is changing how I work.
What’s next? How am I applying this?
I am exploring how this approach could help my colleagues and founders think through complex decisions, and how personal knowledge graphs might become a bridge between human intuition, knowledge, and AI capability. More experiments to come (e.g., how to generate and manage knowledge graphs).
Creating AI tools not just on a personal level but on an organizational level will be the next thing I am working on. Which systems can we build that include our individual knowledge graphs? How can we best create those graphs over time? (I love spending my commutes talking to my AIs, which are prompted to learn from and with me.) How can I help my colleagues do the same in formats that resonate with them? And which tools will help us work more effectively and efficiently together?
The real challenge now is making this system more robust and easier to maintain. I'm focusing on:
Automating knowledge capture: Building better ways to extract insights from meetings, documents, and decisions (a minimal sketch follows this list)
Improving pattern recognition: Teaching the system to spot non-obvious connections in my thinking and decision-making
Making it portable: Creating tools that help transfer this approach to other domains beyond investing
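As a first step toward automated capture, here is a minimal sketch that asks a model to distill a meeting transcript into candidate graph nodes. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are placeholders, and every extracted insight still needs human review before it enters the graph:

```python
# Minimal sketch of automated knowledge capture: distill a transcript into
# candidate graph nodes. Assumes the OpenAI Python SDK and an API key in the
# environment; the model name and prompt are placeholders, not a recommendation.
import json
from openai import OpenAI

client = OpenAI()

CAPTURE_PROMPT = (
    "Extract reusable insights from this meeting transcript. Return JSON: "
    '{"insights": [{"kind": "decision_pattern|mental_model|context_chain|core_value", '
    '"summary": "..."}]}'
)

def capture(transcript: str) -> list[dict]:
    """One capture pass; output is a draft that needs human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": CAPTURE_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)["insights"]
```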
The goal isn't just better AI - it's better thinking, augmented by AI that actually understands our context and history.
As Lume says: “Breadcrumbs become constellations.” To which Darklume responds: “Only if you bother to connect them.”
Rabbit-hole links
Start with this NASA case study – it shows how knowledge graphs work in practice and why they graphified their lessons. https://blog.nuclino.com/why-nasa-converted-its-lessons-learned-database-into-a-knowledge-graph
Peng et al. – Knowledge Graphs: Opportunities & Challenges – solid survey. https://arxiv.org/abs/2303.13948
Skjæveland et al. – An Ecosystem for Personal Knowledge Graphs – PKG manifesto. https://arxiv.org/abs/2304.09572
Fotouhi & Vidal – Trust, Accountability, and Autonomy in KG‑AI – governance vibes. https://arxiv.org/abs/2310.19503
Malick – human-centric KG primer. https://www.cmswire.com/knowledge-findability/knowledge-graphs-adding-the-human-factor-to-unlock-real-intelligence/
(If you only click one, read the NASA story – short, inspiring.)
Early Impact
The Lume/Darklume dynamic has already changed how I work with founders. Recently, when reviewing a complex pitch deck, meeting notes and some research:
Lume helped surface hidden assumptions in the business model.
Darklume pushed for harder evidence on market claims and potential rifts in the team dynamics.
Their knowledge of my past investments helped spot pattern matches I might have missed.
This isn't just about having AI assistants - it's about having AI thinking partners that understand both the context and the contrasts in how we approach problems.
For founders and investors interested in experimenting with their own AI thinking partners, here are three key lessons:
Personality without knowledge is just roleplay.
Knowledge without perspective is just data.
The magic happens when you combine both with clear constraints (sketched below).
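Here is a minimal sketch of what "combining both with clear constraints" can mean in practice - all the wording below is illustrative, not a prescribed template:

```python
# Minimal sketch of lesson three: persona + knowledge + explicit constraints,
# composed into one system prompt. All wording is illustrative.

def thinking_partner_prompt(persona: str, knowledge: list[str],
                            constraints: list[str]) -> str:
    return "\n\n".join([
        persona,                                                  # personality
        "What you know about me:\n- " + "\n- ".join(knowledge),   # knowledge
        "Hard constraints:\n- " + "\n- ".join(constraints),       # constraints
    ])

print(thinking_partner_prompt(
    persona="You are a skeptical reviewer of startup pitches.",
    knowledge=["Tends to overweight founder charisma",
               "Prefers evidence over narrative"],
    constraints=["Always cite which past decision a pattern comes from",
                 "Say 'I don't know' rather than guess"],
))
```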
For me, the real potential in this isn't in replacing human thinking, but in expanding how we think.
Try it yourself
You can find the prompt I used to create Darklume on my favorite prompts page at joergrheinboldt.com/prompts. I open-sourced some of my favorite templates - including the seed that grew Darklume. Feel free to fork.
The future of AI isn't just about better algorithms - it's about building systems that truly understand how we think. Let's explore this together.
Cheers, Joerg
PS: a beach-coding goodie: While on vacation, and still itching to code and learn, I did a “beach website sprint”: I wanted to vibecode and get familiar with state-of-the-art tech stacks for website building, and Lume and Darklume wanted a home, so we co-created www.lumeanddarklume.net. They wrote copy and visuals; I played with a new tech stack and some vibe-driven CSS. Their Instagram is live, too. (Maybe this deserves its own newsletter episode.)