On Building an Engine to Understand People and Context
Understanding people is not simple.
Most management models assume that human work can be represented, ordered, and controlled through fixed structures: roles, departments, skills, levels. They impose static taxonomies onto living systems, forcing adaptive complexity into rigid schemas.
For decades, this mindset has prioritized formal consistency over people’s challenges, potentials, and interests, and over the dynamic contexts in which they work.
But understanding people is not simple.
It’s not enough to know what someone does or even what they say they can do.
It’s not enough to know why they do it or which motivations they claim.
It’s not enough to know how they work or which frameworks they use.
Meaning lives in the complexity of the in-between, in contradictions, in evolving relationships that only become clear over time.
We’ve been working to make that complexity visible, and to design around it more intelligently.
This led us to a simple but persistent question: how can we represent the dynamics of human relationships in ways that non-human systems can interpret and reason through?
So far, our answer has been to move away from static taxonomies and build Khipu, our internal reasoning engine.
Context Is All You Need
If Khipu isn’t a taxonomy, then what is it?
Technically, it’s a graph-based inference engine that connects structured and unstructured data using principles from semiotics, logic, and language.
But more than the theory, what matters is its behavior. Khipu is a machine that reads relationships, infers possible actions, and in many cases triggers their execution, all grounded in the real movement of people in their contexts.
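To make that behavior concrete, here is a deliberately minimal sketch of what reading relationships and inferring actions over a graph can look like. Everything in it, the entities, the relation names, the single rule, is an illustrative assumption, not Khipu’s actual schema or code.

```python
# A minimal, hypothetical sketch of graph-based inference.
# Nothing here is Khipu's real implementation; it only shows the
# shape of the idea: typed edges plus rules that read them.

from collections import defaultdict

class Graph:
    def __init__(self):
        # (subject, relation) -> set of objects
        self.edges = defaultdict(set)

    def add(self, subject, relation, obj):
        self.edges[(subject, relation)].add(obj)

    def objects(self, subject, relation):
        return self.edges[(subject, relation)]

# Structured records and signals extracted from unstructured text
# can both land in the graph as the same kind of edge.
g = Graph()
g.add("ana", "works_on", "project_x")
g.add("ana", "interested_in", "data_viz")
g.add("project_x", "needs", "data_viz")

def suggest_actions(graph, person):
    """Infer a possible action from how a person's interests
    overlap with the needs of the projects around them."""
    for project in graph.objects(person, "works_on"):
        overlap = (graph.objects(person, "interested_in")
                   & graph.objects(project, "needs"))
        for need in overlap:
            yield f"invite {person} to take on {need} in {project}"

print(list(suggest_actions(g, "ana")))
# ['invite ana to take on data_viz in project_x']
```

The point is not the rule itself but where the meaning lives: in the connections between a person and their surroundings, not in a fixed taxonomy of roles and skills.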
And context here isn’t a detail, it’s a design constraint.
What we’re building is not automation-as-usual; it’s a reasoning system. Automation operates on what’s already known; reasoning works with what’s still taking shape. Khipu bets on the latter.
Khipu learns from signals and explicit connections, but also from gaps and ambiguities. It doesn’t just respond to what’s most common; it looks for what makes sense in the moment for each agent in their unique environment.
Generating meaningful inference is more than recognizing a pattern. Often, it’s about noticing what’s missing, what only becomes visible through the right contextual lens.
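One way to picture inference over absence, continuing the hypothetical sketch above (the relation names are, again, assumptions for illustration): a rule that fires not on an edge that exists, but on one that doesn’t.

```python
# Continuing the illustrative sketch: a rule that reads a gap.
# "pursuing" and "supported_by" are made-up relations, not Khipu's.

def notice_gaps(graph, person):
    """Flag a goal someone is pursuing with no supporting
    relationship visible in the graph: meaning inferred from
    what is missing, not from what is present."""
    supporters = graph.objects(person, "supported_by")
    for goal in graph.objects(person, "pursuing"):
        if not supporters:
            yield f"{person} is pursuing {goal} without visible support"

g.add("ana", "pursuing", "team_lead_role")
print(list(notice_gaps(g, "ana")))
# ['ana is pursuing team_lead_role without visible support']
```

The same absent edge means nothing in one context and everything in another; the contextual lens decides whether a gap is a signal.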
Without context, there is no meaning. Without meaning, there is no effective action.
A Symbolic Cognitive Artifact
That’s why we see artificial intelligence not as an external tool, but as a symbolic extension of human cognition.
This view is grounded in a semiotic understanding of reasoning as an interpretive process rooted in situated relationships. Thinking doesn’t happen only inside our heads, it unfolds across systems.
Cognition can be extended by external structures that actively help us perceive, decide, and act. Khipu is one of those structures: an active layer that helps us think with the world, not just about it.
Khipu doesn’t show up anywhere in our product’s interface. But everything the system suggests, triggers, or enables begins in that invisible layer, acting as an extension of your own reasoning process.
Incomplete by Design
By embracing incompleteness as a condition, we acknowledge that intelligence is not a final state. It is a way of staying in motion, learning continuously, in real time.
We are not trying to offer the definitive word on work, people or data.
Khipu was not designed to capture reality with precision; it was designed to move with it: flexible, interactive, and always unfinished.
João
Cofounder of Kipon