Joseph Hunt

Rationality Constraints: Man ⇌ Machine

The majority of articles on our blog feature economic and social approaches for adaptation within an ongoing negotiation. However, equally important (and perhaps more so) is how we reflect on the process and plan for the future. Were we “successful” at achieving our goals for a contract? What could be done better? In many cases the answers to these questions are vague and subjective, often rightfully so. While analytics can tell us the degree of deviation from market value, past contracts, and our pre-planned goals, they fail to describe why that variation occurred within the context of a single negotiation. It is this reflective “why” that we’ll focus on here, considering situational awareness and some problematic limits to logic.



Cells & Silicon

A core problem in analyzing a choice is the reliance on past contracts or market-focused analysis when it comes to a particular agreement. We have previously developed the idea of approaching each negotiation as a unique event, and these principles begin to explain why linear computational analytics fail. Traditional economic methods are fundamentally based on existing patterns; to draw conclusions about what might or should occur, we look at past and present data and draw regression lines. Similar situations should produce similar results, and indeed to some degree (depending on the robustness of your models) they do. More advanced next-gen procedures can adapt to deviation as well by inscribing the range of variation within the model itself. This, however, raises the question: why are all analytics so consistently error-prone, regardless of which (or how many) systems are used? And why do “black swan” events keep occurring without rhyme, reason, or warning?
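To make the failure mode concrete, here is a minimal sketch in Python (all numbers invented for illustration, not drawn from any real negotiation data): a regression fit on well-behaved historical data extrapolates confidently, and a structural break it has never seen renders that extrapolation badly wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Historical" deal data: a steady, pattern-friendly trend plus noise.
history = np.arange(100.0)
values = 2.0 * history + rng.normal(0.0, 5.0, 100)

# Fit a regression line to the past, as traditional analytics would.
slope, intercept = np.polyfit(history, values, deg=1)

# Extrapolate forward: similar situations should produce similar results.
future = np.arange(100.0, 120.0)
predicted = slope * future + intercept

# A "black swan": an abrupt level shift the model was never shaped by.
actual = 2.0 * future - 150.0 + rng.normal(0.0, 5.0, 20)

in_sample = np.mean(np.abs(values - (slope * history + intercept)))
post_break = np.mean(np.abs(actual - predicted))
print(f"mean in-sample error:  {in_sample:.1f}")   # small: the pattern holds
print(f"mean post-break error: {post_break:.1f}")  # large: the pattern broke
```

No amount of refitting on the pre-break data would have helped; the break is not in the data the model was given.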

It is said a dart-throwing monkey is as effective as any stockbroker when it comes to predicting the future. This can be ascribed to chaos: the effect by which small eddies in the current lead to a whirlpool downstream. And while this explains the lines of causality that lead to unexpected outcomes, it doesn’t really explain why our brains are so poorly wired to process things in this way. If reality is fundamentally stochastic along every axis, why is human decision making so different from even the more advanced analytic models described earlier in the complexity series? Yet we seem to do pretty well, having survived and developed over millennia and beating or matching many a computer system in some domains. If there is such a difference between the two, surely something must be wrong in either the models or with mankind itself! Why have human experts at all? Just let the machines do everything. Or the inverse: abandon calculation and use organic logic and intuition alone. Exploring this question leads to the conclusion that machine needs man, just as much as man needs machine.
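For the chaos claim, a minimal sketch using the classic logistic map (a standard illustration of sensitive dependence, not anything specific to negotiation): two trajectories that begin one part in a million apart become completely uncorrelated within a few dozen iterations.

```python
# Sensitive dependence on initial conditions via the logistic map
# x -> r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.3, 0.3 + 1e-6  # two starting points one millionth apart

for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 5 == 0:
        # The gap roughly doubles each step until it saturates at O(1).
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
```

The eddy and the whirlpool, in five lines: an immeasurably small difference at the start is all it takes to defeat prediction.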

Universal and Particular Collide

Deleuze observes that “[there is] an irreducible difference between generality and repetition. Repetition and generality are opposed from the point of view of conduct and from the point of view of law” (Difference and Repetition). At their core, the two concepts are inherently at odds: generality posits an infinite thread with which to identify an object by its fundamental elements, while repetition implies simultaneously both commonality (the identification of similarity) and difference (by not being literally The Same, repeated things are fundamentally independent things that share features). The two have no relation; they are rationally incompatible, the former employing only pure universals while the latter employs only particulars.

But in practice we must use both in order to make sense of the world. What would law be without generality? We need universals to act as standards in guiding action, even if the implementation can’t be sufficiently defined across all situations. And we also require repetition: contracts or clauses can belong to the same class for comparison and yet have vastly different contents. This duality (universal and particular) is both the core of rationality and also its downfall. If they’re incompatible, why don’t we just pick one and derive all our conclusions from it alone? Computation is exactly this approach: the identification of the specific as the basis for logic. A system can’t understand universals; it can only extend its rules of particulars along a predefined space. Humans, however, employ primarily the alternative: we work largely from generalities, using specifics mainly to build and apply heuristics. Each approach is flawed.

Decomposing the Real

Repetition/particularity breaks when it approaches something new: anything for which the system is not sufficiently pre-programmed causes rationality designed from the specific to lock. Fair enough, we’ll just build contingencies for unexpected events so the program can adapt. But any such proactive elements will inevitably fall short of preparing for everything: there are simply too many factors to account for, too much data to analyze. We must sacrifice some accuracy in exchange for functionality. This leads to a problem: a model without every single thing accounted for does not contain or even truly mimic reality. The moment an unaccounted-for contextual feature appears, the model inevitably shatters. And we know that including all variables is not only unreasonable but impossible as well. There is always something missing, some objet petit a which escapes the model. Presuming that simulation in any way accurately reflects what is “real” or “true” is assuming too much. Such is the case “…in a world completely catalogued and analyzed, then artificially resurrected under the auspices of the real, in a world of simulation… the hallucination of truth, of the blackmail of the real” (Baudrillard, Simulacra and Simulation). Lack of true abstraction prevents effective transfer of rules established from the specific to broader contexts: this is the destiny of the machine.
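The locking behavior can be made tangible with a toy sketch: a hypothetical advisor built purely from particulars (the clause names and rules below are invented for illustration) is exact on everything it was programmed for, and has no move at all on anything it was not.

```python
# A hypothetical, specific-only "advisor": an explicit lookup of known cases.
# It cannot generalize; a novel input simply has no entry to extend from.
KNOWN_CLAUSES = {
    "limitation of liability": "negotiate a cap",
    "indemnification": "escalate to counsel",
    "payment terms": "accept if net-60 or better",
}

def advise(clause_type: str) -> str:
    try:
        return KNOWN_CLAUSES[clause_type]
    except KeyError:
        # Rationality designed from the specific locks on the new.
        raise NotImplementedError(f"no rule for novel clause: {clause_type!r}")

print(advise("payment terms"))   # a pre-programmed particular: works
print(advise("data residency"))  # something new: the system locks
```

Adding more entries only moves the boundary of the predefined space; it never removes it.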

Will mankind do better? Hardly. Generality/universality is a means of deriving rules from the particular, but the nature of the specific is such that any overarching heuristics will fail to meet all situations effectively. Just like over-specificity, over-generalization breaks because all situations are unique. As before, repetition does not imply generality; similarity itself suggests difference. And yet we cannot apply the rules of the specific like machines do. We require generalizations to think, and these unquestioned abstractions coalesce in the construct of ideology: our individualized set of rules enables us to turn our focus to higher-order complexities without getting caught by tiny details. “Ideology is not a dreamlike illusion that we build to escape insupportable reality; in its basic dimension it is a fantasy-construction which serves as a support for our ‘reality’ itself” (Zizek, The Sublime Object of Ideology). Lack of true specificity causes theoretical rules to be coarsely applied in context, resulting in poor adaptation to new data. Thus the very thing we need in order to rationalize itself poses an upper limit to our effectiveness in applying logic.

The conclusion, then. Generality and repetition are inherently incompatible, yet simultaneously necessary. Man and machine each operate largely along one axis: the human the former, the computer the latter. As each fails in implementation for its own reasons, yet both are essential, the choice between them is a catch-22. Generality proposes what is best across all environments, and specificity allows more consistent application in each situation. Each requires the other: cells & silicon.


About Intellext™

Intellext is an AI startup that is revolutionizing the way contracts are negotiated, accelerating time to close, and improving deal terms. Intellext’s Intelligent Negotiation Platform™ eliminates the complexities of contract redlines and stakeholder collaboration and optimizes deal terms by applying machine learning during the negotiation process.





