Bora's Law: Intelligence Scales With Constraints, Not Compute
This is a working paper exploring an emerging principle in artificial intelligence development. As our understanding evolves, certain aspects may be refined or expanded. The core insight, however, remains constant: the relationship between intelligence, base capabilities, and constraints follows a fundamental pattern that could reshape how we approach artificial intelligence development.
The pursuit of artificial intelligence has largely focused on scaling compute power and model size, as demonstrated by the development of large language models like GPT-4. This approach has proven essential for establishing base intelligence - just as humans need fundamental language and pattern recognition capabilities before learning complex tasks. However, a more fundamental principle emerges when we examine how intelligence actually scales.
This principle, which we'll formalize as Bora's Law, reveals that intelligence follows a mathematical relationship: I = Bi(C²), where base intelligence (Bi) is amplified by the square of constraint clarity (C). This elegant relationship suggests that while establishing base intelligence remains important, the path to more sophisticated capabilities lies in understanding and implementing precise constraints.
This relationship transforms not just how we think about AI development, but how we understand the future of human work itself. As intelligence becomes increasingly constraint-based, the nature of human contribution evolves from task execution to constraint engineering - a shift that fundamentally reshapes every profession.
The Fundamental Formula
Bora's Law can be expressed mathematically as:
I = Bi(C²)
Where:
I represents Intelligence/Capability
Bi represents Base Intelligence
C represents Constraint Clarity
This formulation captures a profound claim about intelligence: its effectiveness scales with the square of constraint clarity, provided a foundational level of base intelligence exists.
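To make the relationship concrete, here is a minimal numerical sketch. The function and the input values are purely illustrative assumptions - the units are arbitrary - but they show how gains in constraint clarity compound faster than equal gains in base intelligence.

```python
def capability(base_intelligence: float, constraint_clarity: float) -> float:
    """Bora's Law, I = Bi * C^2, in illustrative, arbitrary units."""
    return base_intelligence * constraint_clarity ** 2

# Doubling base intelligence doubles capability...
print(capability(base_intelligence=2.0, constraint_clarity=3.0))  # 18.0
print(capability(base_intelligence=4.0, constraint_clarity=3.0))  # 36.0

# ...while doubling constraint clarity quadruples it.
print(capability(base_intelligence=2.0, constraint_clarity=6.0))  # 72.0
```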
Understanding Base Intelligence
The emergence of advanced AI models like GPT-4 and Claude 3.5 Sonnet has demonstrated what sufficient base intelligence looks like in artificial systems. These models exhibit fundamental capabilities - language understanding, pattern recognition, logical reasoning - that parallel human cognitive development. This parallel reveals a universal pattern in how intelligence, whether artificial or human, builds upon foundational capabilities.
Consider the levels of base intelligence in human development. Just as educational stages - from high school to PhD - represent increasing levels of cognitive capability, AI systems have evolved through similar progressions. Each level enables more complex tasks: a high school graduate can handle basic analytical tasks, a college graduate can tackle complex problem-solving, and a PhD holder can engage in original research. Similarly, modern AI models demonstrate varying levels of base intelligence, with the most advanced systems showing capabilities that match high-level human cognitive functions.
This layered development of base intelligence becomes increasingly crucial as roles transform from task execution to constraint engineering. Just as a PhD student learns to define research boundaries, future professionals must develop the base intelligence needed to engineer effective constraints. The parallel between educational levels and constraint engineering capabilities becomes clear: higher levels of base intelligence enable more sophisticated constraint definition.
Base intelligence operates on two distinct but interrelated levels. Universal capabilities - language comprehension, pattern recognition, logical reasoning - form the foundation. These are comparable to general education, enabling broad adaptation to new challenges. Domain-specific knowledge, like mathematical expertise for physics or programming logic for software development, builds upon this foundation. In AI systems, we see this same pattern: models like GPT-4 and Claude 3.5 Sonnet demonstrate strong universal capabilities while excelling in specific domains through focused training.
The critical insight here is that once sufficient base intelligence is established, it enables adaptation to multiple tasks through the application of different constraints. A human with strong base intelligence can excel in various roles not by learning each from scratch, but by applying their foundational capabilities within new constraints. Similarly, advanced AI models can tackle diverse challenges not through additional training, but through proper constraint definition. This is why a business graduate can become an effective product manager, or why GPT-4 can write both poetry and code - the base intelligence remains constant while constraints shape the specific application.
The Role of Constraints
Once sufficient base intelligence is established, whether through formal education or practical experience, humans demonstrate a remarkable ability to adapt to new tasks and challenges. Consider a college graduate entering their first job - their success doesn't come from learning everything about their specific role during their education. Instead, they succeed by applying their base intelligence within the constraints of their new position.
This pattern becomes particularly relevant as professional roles evolve. A lawyer's value increasingly lies not in memorizing cases but in defining precise legal constraints for AI analysis. A doctor's expertise shifts from routine diagnosis to engineering sophisticated diagnostic constraints. Each profession transforms through this lens: success comes from applying base intelligence to constraint definition rather than task execution.
This pattern repeats across all forms of human learning and adaptation. Someone with strong base intelligence - whether acquired through traditional education or real-world experience - can quickly master new skills not by starting from zero, but by understanding and operating within new sets of constraints. A business major can become a successful product manager not because they studied product management specifically, but because they can apply their base intelligence within the constraints of product development, user needs, and business requirements.
This fundamental relationship between base intelligence and constraints mirrors what we're seeing in artificial intelligence development. Once an AI system achieves sufficient base intelligence (like GPT-4 level capabilities), its effectiveness in specific tasks comes not from additional training or larger models, but from the clear definition and application of constraints.
Constraints serve several crucial functions:
Solution Space Reduction
Without constraints, an intelligent system must consider infinite possibilities
Constraints eliminate invalid or undesirable solutions
This focuses computational resources on viable options
Pattern Recognition Enhancement
Constraints help identify relevant patterns
They separate signal from noise
They guide learning toward meaningful solutions
Validation Framework
Constraints provide clear success criteria
They enable self-correction
They ensure consistency in outputs
Search Termination
Constraints define clear stopping conditions
They prevent infinite exploration of possibilities
They enable recognition of when a solution is "good enough"
This final function is perhaps the most critical. Without clear constraints, an intelligent system - whether human or artificial - could theoretically continue searching for "better" solutions indefinitely. Constraints not only guide us toward valid solutions but tell us when we've found an acceptable one.
Consider a writer working on an article. Without constraints like word count, deadline, and target audience, they could endlessly refine their work. It's the constraints that enable them to complete the task effectively. The same principle applies to any intelligent task, from engineering design to business strategy.
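A small sketch makes these functions tangible. The candidates and constraint checks below are hypothetical, but they show how constraints both prune the solution space and define a clear stopping condition rather than allowing an open-ended search.

```python
# Hypothetical example: constraints as predicates that prune candidates
# and give the search an explicit termination point.
candidates = range(1, 10_000)

constraints = [
    lambda n: n % 7 == 0,   # solution space reduction
    lambda n: n > 100,      # validation criterion
]

def acceptable(n: int) -> bool:
    """A candidate is acceptable only if every constraint is satisfied."""
    return all(check(n) for check in constraints)

# Search termination: stop at the first candidate that satisfies every
# constraint, instead of exploring the space indefinitely.
solution = next(n for n in candidates if acceptable(n))
print(solution)  # 105
```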
Why Constraints Are Squared
The squared nature of constraints in Bora's Law (C²) reflects a fundamental truth about how constraints interact and reinforce each other. When multiple constraints are clearly defined, their impact on intelligence isn't merely additive - it's multiplicative.
Consider a practical example: teaching someone to drive. With a single constraint like "stay in your lane," you get a linear improvement in driving performance. Add a second constraint like "maintain safe following distance," and something interesting happens. These constraints don't just stack - they interact. Every position within the lane must now also satisfy the following distance requirement, and every following distance must work within lane positioning. The result is an exponential improvement in driving safety and efficiency.
This multiplicative effect appears across all domains of intelligence. In software development:
Single Constraint: "Code must work"
Linear improvement in output quality
Add Constraint: "Code must be secure"
Every line that works must also be secure
Every security measure must maintain functionality
Result: Exponential improvement in code quality
Or in business decision-making:
Single Constraint: "Must be profitable"
Basic filter for decisions
Add Constraint: "Must be scalable"
Every profitable option must also scale
Every scaling decision must maintain profitability
Result: Exponentially better business strategies
This multiplication of constraints explains why simple rules often lead to sophisticated outcomes. Each well-defined constraint doesn't just eliminate some possibilities - it interacts with all other constraints to create a highly specific solution space. This is why C is squared in Bora's Law: it represents the fundamental interaction effect between constraints in shaping intelligent behavior.
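A toy simulation illustrates the intersection effect in the software example above. Assuming, purely for illustration, that the two checks are independent and each passes about half of the candidates, the surviving fraction is roughly the product of the two pass rates, so each added constraint multiplies the selectivity rather than merely adding to it.

```python
import random

random.seed(0)

# Hypothetical candidate solutions, each with two independent quality checks.
candidates = [
    {"works": random.random() < 0.5, "secure": random.random() < 0.5}
    for _ in range(10_000)
]

works = [c for c in candidates if c["works"]]
works_and_secure = [c for c in works if c["secure"]]

# One constraint roughly halves the space; the second intersects with it,
# so the surviving fraction is close to the product of the pass rates.
print(len(works) / len(candidates))             # ~0.5
print(len(works_and_secure) / len(candidates))  # ~0.25
```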
The WBS Framework: A Natural Implementation
Just as Maxwell's equations naturally led to practical electromagnetic applications, Bora's Law leads us to a fundamental framework for implementing constraint engineering. The What-Boundaries-Success (WBS) Framework emerges as the natural manifestation of how constraints interact with base intelligence.
Consider the squared nature of constraints in I = Bi(C²). For any given task, we need a systematic way to define and implement these constraints to achieve the multiplicative effect. This leads us to three fundamental components that mirror the mathematical structure:
What (W) defines the transformation of base intelligence into task-specific capability. It provides the direction for the system's intelligence, much like a vector gives magnitude and direction to a force.
Boundaries (B) implement the constraints that create the multiplicative effect. These are not mere limitations but rather the structural elements that enable the C² term in Bora's Law to manifest.
Success (S) provides the closure condition for the constraint space, completing the mathematical framework by defining when a solution satisfies all constraints.
The relationship between these components isn't arbitrary - it's a direct consequence of how intelligence interacts with constraints. When we define What we want, establish clear Boundaries, and specify Success criteria, we're effectively engineering the constraint clarity (C) term in Bora's Law.
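One way to make this concrete is as a simple task specification object. The class below is a hypothetical sketch of what a WBS specification might look like in code, not a prescribed API; the field names simply mirror the framework's three components.

```python
from dataclasses import dataclass, field

@dataclass
class WBSSpec:
    """A hypothetical What-Boundaries-Success task specification."""
    what: str                                            # the transformation the task requires
    boundaries: list[str] = field(default_factory=list)  # constraints that shape the solution space
    success: list[str] = field(default_factory=list)     # criteria that close the search

spec = WBSSpec(
    what="Summarize the quarterly report for the executive team",
    boundaries=["Maximum 300 words", "No confidential client names"],
    success=["Covers revenue, costs, and risks", "Readable in under two minutes"],
)
```

In practice, the same specification could feed both the prompting layer and whatever checks for success, which is what keeps the three components aligned.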
Current Methods in Light of WBS
Modern prompting techniques like Chain of Thought, Tree of Thoughts, and Reflection represent sophisticated approaches to searching solution spaces. These methods aren't wrong - they're just operating in an unnecessarily large solution space. Consider:
Chain of Thought (CoT) provides a structured way to explore possibilities
Tree of Thoughts (ToT) creates branching paths through the solution space
Reflection enables self-correction and iteration
However, without proper constraints, these methods must search through vast, often infinite possibilities. The WBS Framework doesn't replace these techniques; rather, it makes them exponentially more effective by first constraining the space they need to search.
This is the natural consequence of Bora's Law: the C² term shows us that properly constrained intelligence is exponentially more effective than unconstrained search, regardless of the search method used.
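As an illustration, the sketch below wraps a Chain of Thought style instruction inside explicit WBS constraints. The template, wording, and task details are hypothetical assumptions; the point is only that the search method operates inside a constrained space rather than being replaced by it.

```python
def wbs_prompt(what: str, boundaries: list[str], success: list[str]) -> str:
    """Frame a Chain of Thought instruction with explicit WBS constraints (hypothetical template)."""
    lines = [
        f"Task: {what}",
        "Boundaries:",
        *[f"- {b}" for b in boundaries],
        "Success criteria:",
        *[f"- {s}" for s in success],
        "Think step by step, stay within the boundaries, and stop once every success criterion is met.",
    ]
    return "\n".join(lines)

print(wbs_prompt(
    what="Draft a migration plan for the billing service",
    boundaries=["No downtime during business hours", "Keep the existing database schema"],
    success=["Rollback procedure included", "Timeline fits within one sprint"],
))
```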
This framework isn't just theoretical - it maps directly to practical implementation across multiple technical layers.
Implementation Layers
The WBS Framework operates across multiple technical layers, much like how modern computer systems are structured. This layered architecture ensures that constraints are both flexible enough for specific applications while maintaining fundamental guarantees.
Consider the following implementation layers:
Model Weight Layer
Base intelligence and fundamental constraints are implemented at the weight layer of neural networks. These constraints - like alignment, safety, and basic reasoning capabilities - function similarly to CPU architecture in modern computers. They are immutable to higher layers, ensuring that core guarantees remain intact regardless of application-specific constraints.
Prompting Layer
Most WBS implementations occur at this layer, where task-specific constraints are defined and applied. Like application code in computer systems, this layer provides flexibility while respecting the boundaries set by lower layers. A marketing task's constraints, for instance, cannot override fundamental safety constraints implemented at the weight layer.
Execution Engine
The execution engine serves as the crucial bridge between layers, verifying constraint satisfaction across the system. It returns detailed feedback about which constraints are fully satisfied and which are only partially met. This verification process operates similarly to how operating systems manage and validate application behaviors while preserving system integrity.
This layered architecture ensures that while application-specific constraints can be freely defined and modified, they cannot interfere with more fundamental constraints. For example, task-specific boundaries for a coding project cannot override basic safety constraints, just as application code cannot modify CPU architecture.
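A minimal sketch of this verification idea, under the assumption that constraints can be modeled as named predicate checks: base-layer checks always apply and cannot be overridden by task-specific ones. This illustrates the layering principle only; it is not a description of any real runtime.

```python
# Hypothetical constraint checks, split by layer. Base-layer checks always run
# and cannot be removed or replaced by task-specific checks.
BASE_CONSTRAINTS = {
    "no_disallowed_content": lambda text: "FORBIDDEN" not in text,
}

def verify(text: str, task_constraints: dict) -> dict:
    """Return per-constraint feedback, with base-layer constraints taking precedence."""
    checks = {**task_constraints, **BASE_CONSTRAINTS}  # base layer wins on name clashes
    return {name: check(text) for name, check in checks.items()}

report = verify(
    "Quarterly summary: revenue up, costs flat.",
    task_constraints={"under_50_words": lambda text: len(text.split()) <= 50},
)
print(report)  # {'under_50_words': True, 'no_disallowed_content': True}
```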
The layered implementation of WBS provides a bridge between theoretical understanding and practical application, leading us to consider its broader implications.
Implications for Current Development
The WBS Framework's emergence from Bora's Law has profound implications for current AI development. Just as understanding Maxwell's equations transformed our approach to electromagnetic engineering, understanding the fundamental relationship between intelligence, base capabilities, and constraints transforms our approach to AI advancement.
The current focus on scaling compute and model size remains essential - it builds the base intelligence (Bi) term in our equation. However, Bora's Law reveals that the most significant performance gains come from properly engineering constraints through the natural structure of What-Boundaries-Success.
Consider the computational efficiency implications:
While increasing base intelligence requires exponential compute resources
Improving constraint clarity through WBS provides multiplicative gains
The C² term amplifies existing base intelligence without requiring additional compute
This understanding doesn't oppose current scaling efforts but rather complements them. Once sufficient base intelligence is established (as with current large language models), the path to higher capability isn't through more compute alone, but through precise constraint engineering using the WBS Framework.
The Test-Time Compute Paradigm
The industry's growing focus on test-time compute scaling represents a powerful approach to exploring solution spaces. This computational capability becomes even more powerful when combined with proper constraint engineering. While companies invest heavily in inference infrastructure and novel search methods, the fundamental challenge remains: how to make this compute more effective. Consider the relationship:
Without constraints: test-time compute must navigate through vast, unbounded possibility spaces where success is largely probabilistic. In this unconstrained environment, finding a solution becomes akin to searching for a needle in an infinite haystack. What works in one instance may fail in the next, as each computational run explores different paths through this boundless space. The lack of clear boundaries means that success often depends more on chance than strategy, and reproducing successful results becomes nearly impossible as the search space remains infinite and undefined.
With WBS Framework constraints: the same test-time compute resources operate with far greater precision and efficiency. By establishing clear boundaries and success criteria, we transform an infinite search space into a well-defined domain. This constrained environment enables test-time compute to work deterministically, producing consistent, reproducible results. The presence of explicit success criteria means we know exactly when to stop searching, making the entire process more efficient and reliable. What was once a probabilistic search becomes a deterministic operation, with each unit of compute power working within meaningful boundaries toward clear objectives.
This is like having a powerful search algorithm: it becomes vastly more efficient when you know exactly where to look. Test-time compute isn't wrong - it's just operating in an unnecessarily large solution space. By first applying the WBS Framework to constrain the space, we transform a probabilistic search into a deterministic one, making every additional unit of test-time compute exponentially more effective and reliable.
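A toy search makes the contrast concrete. Both functions below draw from the same compute budget; the objective, the budget, and the success criterion are illustrative assumptions. Without a success criterion the search consumes its entire budget and keeps the best guess it happened to find; with one, it stops as soon as an acceptable answer appears and reports how much compute it actually used.

```python
import random

def sample_candidate(rng: random.Random) -> int:
    """Stand-in for one unit of test-time compute producing a candidate answer."""
    return rng.randint(0, 1_000_000)

def unconstrained_search(budget: int, seed: int = 0) -> int:
    """Spend the full budget and keep the best candidate seen."""
    rng = random.Random(seed)
    return min(sample_candidate(rng) for _ in range(budget))

def constrained_search(budget: int, is_success, seed: int = 0):
    """Stop as soon as a candidate meets the success criterion; report compute spent."""
    rng = random.Random(seed)
    for spent in range(1, budget + 1):
        candidate = sample_candidate(rng)
        if is_success(candidate):
            return candidate, spent
    return None, budget

print(unconstrained_search(budget=10_000))
print(constrained_search(budget=10_000, is_success=lambda n: n < 1_000))
```

With a fixed seed both runs are reproducible, but only the constrained one knows when it is finished.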
This transformation mirrors the evolution of professional work itself. Just as we make test-time compute more effective through constraints, we make human work more valuable through constraint engineering. The professional of the future isn't competing with AI on task execution but collaborating through constraint definition.
Think of it like the development of aerodynamics: while more powerful engines were essential for flight, understanding the fundamental laws of lift and drag transformed aviation. Without this understanding, we'd still be trying to achieve better performance through engine power alone. Similarly, while more powerful models and search methods matter, understanding and implementing proper constraints through WBS leads to exponentially better performance.
New Direction for Development
This understanding points toward a more efficient development path, one that leverages both existing investments and new insights:
Establish sufficient base intelligence through current scaling approaches
Apply precise constraint engineering through the WBS Framework
Achieve multiplicative performance gains through the C² effect
Use existing search methods (CoT, ToT, Reflection, CoT with test-time compute) within constrained spaces
This path forward reflects not just AI development but the evolution of human work itself. As we better understand how intelligence scales with constraints, we transform how humans contribute to productive systems.
The result is a more efficient, more predictable path to AI advancement - one that emerges naturally from the mathematics rather than from trial and error. By properly constraining the solution space first, we make every existing technique exponentially more effective while reducing the computational resources required. This approach aligns perfectly with the mathematical reality of I = Bi(C²), where improvements in constraint clarity (C) provide multiplicative gains regardless of the search method employed.
The Transformation of Work
The implications of Bora's Law extend beyond AI development to reshape the fundamental nature of human work. Just as the industrial revolution transformed manual labor into machine operation, the AI revolution transforms task execution into constraint engineering.
Consider the evolution of professions:
Software developers shift from writing code to defining success criteria and architectural boundaries
Managers evolve from directing tasks to engineering constraint systems that enable AI-driven execution
Creative professionals move from production to defining artistic constraints and success metrics
Medical professionals transition from routine diagnosis to engineering diagnostic constraints and validation criteria
This transformation follows a natural progression. When intelligence scales with constraints, human value lies in the ability to define these constraints effectively. The future belongs not to those who can execute tasks most efficiently, but to those who can define task constraints most precisely.
Education and training must evolve accordingly. Future professionals need to understand:
How to analyze and decompose complex tasks into clear objectives
How to define precise operational boundaries
How to establish measurable success criteria
This isn't just another workforce transformation - it's a fundamental shift in how humans contribute to productive systems. Just as we evolved from physical labor to knowledge work, we now evolve from knowledge work to constraint engineering.
Conclusion
Just as Maxwell's equations revealed the underlying principles of electromagnetism, Bora's Law reveals the fundamental relationship between intelligence, base capabilities, and constraints. This understanding comes at a crucial moment in artificial intelligence development, as the industry grapples with the challenges of scaling intelligence. While the pursuit of better base intelligence continues, Bora's Law shows us that the next great advances in AI will come not just from more powerful models, but from better understanding and implementation of constraints.
P.S. We're building a community of engineers, researchers, and builders focused on constraint engineering and the practical applications of Bora's Law. Join the discussion: Discord link