The World Is Not Enough

In AI, a world model is the internal map a system uses to predict and plan. Large language models like GPT rely on theirs to “guess” what might happen in text—or, increasingly, in real life. As we automate harder tasks, that map becomes critical: without a coherent picture of reality, an AI can’t reason or act reliably.
 
But there is no single, optimal map. We have to choose which features to highlight. Do we mirror the world exactly as it has been, or emphasize the future we want? Take mobility: if a driverless car must swerve on a cliff road, does it privilege the passenger or the pedestrian? The decision encodes values—speed, resilience, fairness—long before any code runs.
 
Much talk of AI imagines a glossy, placeless future that feels alienating. Pull.City’s world model, by contrast, is tuned for local agency: making nearby businesses and organizations more present, strengthening neighborhood ties, and letting communities steer the technology that serves them. That lens guides how we model the world—and reflects, of course, the kind of communities in which we want to live.