
Table 2 Analysis of examples of how the NK approach is a poor mediator in either direction

From: Balancing the perceptions of NK modelling with critical insights

Reality-to-Theory Mediation Example using NK Simulation

Lenox et al.'s (2007) Management Science piece offers an alternative explanation for observed patterns of industry evolution based on a simulation involving an NK model.

Main Issues: Their NK model is used as a feeder of pseudo-data (i.e., of a firm's cost levels) into a static Cournot competition game (which updates the fitness outcomes in the NK model). This two-step procedure with feedback is repeated to mimic several generally observed patterns of industry evolution. In the paper, the NK model acts as an intermediary means to get from reality to theory-building (through its use in a process that appeared to mimic real outcomes). The first issue is that a more applicable model for the phenomenon (e.g., the NKC simulation) was not used. The second issue is that a less confounding explanation was available (Ganco et al., 2020). These issues raise legitimate suspicions about the conclusions reached.

Theory-to-Reality Mediation Example using NK Simulation

Welter and Kim's (2018) Journal of Business Venturing piece tests the logic of effectuation through an NK simulation.

Main Issue: Their NK model captures neither the theory being tested (e.g., here, it does not actually model the five parts of effectuation logic) nor the contexts in which that theory is tested (e.g., known versus risky versus uncertain landscapes). Thus, any findings of support for the theory's robustness may be misleading, even though they appear legitimized by publication in a top specialty journal.
When translating from reality to theory, simplifications are needed to capture the main elements of the phenomena, but many of the simplifications in an NK simulation fit managerial–strategic phenomena poorly.
Feeding the results of such a simulation into another simplified model of reality (i.e., Cournot competition) may actually amplify those simplifications' effects (i.e., the failures to capture reality with sufficient accuracy) when the interactions between the two models are not properly understood.
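That two-step NK-to-Cournot feeding can be sketched in miniature. The following is a hypothetical illustration, not the authors' actual specification: it assumes linear inverse demand, uses the standard closed-form Cournot-Nash quantities, and stands in for the NK feeder with a simple fitness-to-cost mapping (all names and parameter values are illustrative).

```python
import random

def cournot_equilibrium(costs, a=100.0, b=1.0):
    """Cournot-Nash quantities for linear inverse demand P = a - b * Q,
    given each firm's marginal cost. Standard interior-solution closed form:
    q_i = (a - n*c_i + sum(c_j, j != i)) / (b * (n + 1)).
    (The max(0, .) clip is a crude guard for corner cases, a simplification.)
    """
    n = len(costs)
    total_c = sum(costs)
    qs = [max(0.0, (a - n * c + (total_c - c)) / (b * (n + 1))) for c in costs]
    price = a - b * sum(qs)
    return qs, price

def fitness_to_cost(fitness, c_max=60.0):
    # Stand-in for step 1 (the NK feeder): higher fitness -> lower marginal cost.
    return c_max * (1.0 - fitness)

random.seed(0)
fitnesses = [random.random() for _ in range(5)]
for step in range(3):
    costs = [fitness_to_cost(f) for f in fitnesses]
    qs, price = cournot_equilibrium(costs)
    print(f"mean cost {sum(costs)/5:6.2f}  output {sum(qs):7.2f}  price {price:6.2f}")
    # Step 2 feedback stand-in: search nudges each fitness upward, so costs
    # fall over time and output mechanically rises (dq/dc < 0 by construction).
    fitnesses = [min(1.0, f + random.uniform(0.0, 0.1)) for f in fitnesses]
```

The sketch makes the amplification worry concrete: any error in the cost levels the first model feeds in propagates directly through the second model's equilibrium quantities and prices.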
The legitimacy of the NK model methodology—as alluded to through citations—was leveraged to justify its use in their application to theory-testing in this instance.
The reviewer pool (perhaps thin in terms of the required overlap of expertise in both NK modelling and effectuation) failed to pick up on important issues, even though the explicit coding provided clearly revealed what was modelled, in what way, and what was not.
Consider some of their NK model simplifications:
- N is constant (but in the real world the list of product characteristics and process steps usually increases over time);
- search is mostly local (but can involve the limited imitation of the best rival) and is costless (whereas in the real world, no search is costless, and imitation can violate intellectual property protections);
- search is constrained in artificial ways;
- changes are done gene-by-gene and are costless (whereas in the real world, changes often affect more than one element and the costs reflect that);
- firms only participate in the industry when profitable (whereas in the real world, especially for new firms, this is unusual at least in the short term);
- competition is solely cost-based (whereas in reality, most products are not commodities);
- firms face no entry or exit costs (but such costs exist in reality, even in Cournot models through fixed costs);
- firms can alter scale instantaneously and without cost (which is unrealistic other than for digital goods); and
- searches are perfectly accurate (whereas in the real world, firms spend resources to spread disinformation, especially about profitability and imitability, to generate causal ambiguity).
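For readers unfamiliar with these modelling choices, here is a minimal, generic textbook-style NK sketch (not the paper's actual code) exhibiting exactly the simplifications listed above: constant N, purely local, costless, gene-by-gene search.

```python
import random

random.seed(1)
N, K = 8, 2  # N binary elements; each element's payoff depends on K others

# Random fitness contributions, drawn lazily: element i's contribution depends
# on its own state plus the states of its K circular neighbours.
contrib = [{} for _ in range(N)]

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = tuple(genome[(i + j) % N] for j in range(K + 1))
        if key not in contrib[i]:
            contrib[i][key] = random.random()
        total += contrib[i][key]
    return total / N

# Local, costless, one-bit-at-a-time hill climbing: the canonical NK search,
# with none of the real-world costs or constraints noted above.
genome = [random.randint(0, 1) for _ in range(N)]
improved = True
while improved:
    improved = False
    base = fitness(genome)
    for i in range(N):
        trial = genome[:]
        trial[i] ^= 1  # flip a single "gene"
        if fitness(trial) > base:
            genome, base, improved = trial, fitness(trial), True
print("local optimum fitness:", round(fitness(genome), 3))
```

The brevity of the sketch is itself the point: every one of the criticized assumptions (costless search, single-bit changes, fixed N, perfectly accurate evaluation) is baked in by default.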
The applicability of their model specification hinges on the assumption that the interaction dynamics do not alter the shape of the production function but only shift it in terms of costs (Ganco & Hoetker, 2009); this appears unrealistic in most industries of interest beyond the short term.
The simulation did indicate one thing clearly: a more flexible decision rule (i.e., one held for fewer periods) outperforms a less flexible one when confronting a changing landscape (unless the firm, through its rule, can predict that landscape's peaks with high accuracy).
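That result is predictable enough to reproduce in a toy setting. The sketch below is a hypothetical demonstration using a generic NK implementation (not the authors' code): the landscape is completely re-drawn each epoch, a "flexible" firm re-searches every epoch, and a "rigid" firm optimizes once and then holds its configuration.

```python
import random

def make_fitness(N, K, rng):
    """Build a fresh random NK fitness function (lazily memoized)."""
    contrib = [{} for _ in range(N)]
    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            if key not in contrib[i]:
                contrib[i][key] = rng.random()
            total += contrib[i][key]
        return total / N
    return fitness

def hill_climb(genome, fitness):
    """One-bit-flip local search to a local optimum."""
    improved = True
    while improved:
        improved = False
        base = fitness(genome)
        for i in range(len(genome)):
            trial = genome[:]
            trial[i] ^= 1
            if fitness(trial) > base:
                genome, base, improved = trial, fitness(trial), True
    return genome

rng = random.Random(42)
N, K, epochs = 10, 3, 30
rigid = [rng.randint(0, 1) for _ in range(N)]
flexible = rigid[:]
rigid = hill_climb(rigid, make_fitness(N, K, rng))  # optimizes once, then holds
flexible_scores, rigid_scores = [], []
for _ in range(epochs):
    f = make_fitness(N, K, rng)         # landscape re-drawn: uncorrelated change
    flexible = hill_climb(flexible, f)  # flexible firm re-searches every epoch
    flexible_scores.append(f(flexible))
    rigid_scores.append(f(rigid))
print("flexible avg:", round(sum(flexible_scores) / epochs, 3))
print("rigid avg:   ", round(sum(rigid_scores) / epochs, 3))
```

Because each new landscape is statistically independent of the old one, the rigid firm's held configuration is no better than a random draw, while the flexible firm reaches a local optimum every epoch; the gap is guaranteed by construction, which is the sense in which the finding was ex ante predictable.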
The authors did not model effectuation (or causation) as defined multi-dimensionally in that stream. They did not model planning. They did not model risk. They did not model uncertainty.
So, the translation from theory to reality through the NK simulation was faulty, as neither the theory nor the reality was captured correctly in the model coded in their paper.
Unaddressed Questions: What does a stable landscape have to do with risk? What does an uncorrelated change of landscape have to do with uncertainty when that change relies on a random draw from a uniform distribution? Why aren’t the real intermediate benefits associated with planning (e.g., improved accuracy and efficiency of future actions) and the real intermediate costs associated with flexibility (e.g., retraining expenses, and penalties for being caught under-capacity) captured in the (intendedly realistic) simulated testing of the alternative logics?
The patterns generated by their two-step process model were ex ante predictable, making the actual simulation and its description redundant. The patterns included:
(1) continued but declining improvements in efficiency over time; that is what evolution, in general, promises, and it is an artifact of an NK model;
(2) industry output increasing at a decreasing rate, with dq/dc < 0; but this must happen, as decreasing c implies increasing q, which follows from (1) and so is another artifact of the NK model when it feeds the Cournot model;
(3) prices steadily declining at a decreasing rate; this follows from (2) and a downward-sloping inverse-demand curve;
(4) an industry participation pattern of rapid entry followed by mass exit, leading to a shakeout and a stable number of competitors; the stable competition results from the imposed constraint on being profitable to enter and from the way entrants are seeded, the rapid over-entry is due to initial inefficiencies, homogeneous search skills, and random initial assignments, and the exit is due to stable demand and an imposed relative profitability condition [all following from evolutionary processes that allow quick entry and exit and limited capacity]; and,
(5) these patterns being solely related to the interconnectedness of the technological solution (to the K in the NK); but this is not necessarily true, because the patterns are connected to the complexity of the landscape, not to K itself. This is an issue, because the landscape can be affected by K, but also by N (when K > 1), by the allowable levels of elements in N, and by the forms of the K-functions.
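The point that landscape complexity depends on more than K alone can be shown directly by counting local optima on small random landscapes. This is a hypothetical demonstration using a generic NK implementation (not the paper's code); the seed and sizes are arbitrary.

```python
import itertools
import random

def count_local_optima(N, K, seed=0):
    """Count one-bit-flip local optima of a random NK landscape."""
    rng = random.Random(seed)
    contrib = [{} for _ in range(N)]

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            if key not in contrib[i]:
                contrib[i][key] = rng.random()
            total += contrib[i][key]
        return total / N

    optima = 0
    for genome in itertools.product((0, 1), repeat=N):
        f = fitness(list(genome))
        neighbours = []
        for i in range(N):
            trial = list(genome)
            trial[i] ^= 1
            neighbours.append(fitness(trial))
        if all(f >= nf for nf in neighbours):
            optima += 1
    return optima

# Ruggedness moves with K, but also with N for a fixed K.
for N, K in [(8, 0), (8, 4), (12, 4)]:
    print(f"N={N:2d} K={K}: {count_local_optima(N, K)} local optima")
```

With K = 0 the landscape is separable and has a single optimum; raising K multiplies local optima, but so does changing N at fixed K, illustrating why attributing the generated patterns to K alone is too strong.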
When there is no real post-publication correction process in the journals that publish such NK simulation-based theory-testing studies, even when editors are made aware of the problems, what can be done to correct any misleading results?
If such journals do not provide a dialogue outlet, and would not publish a replication of such a study (given that the issues concern not the data-gathering but the accuracy of the coding), what is to be done in the management field to improve the role of theory, and to fulfil our role as diligent scholars in correcting issues when we find them?