
Table 1 Analysis of inappropriate NK assumptions

From: Balancing the perceptions of NK modelling with critical insights

Questionable Assumptions for Managerial Phenomena | Notes
Assumption: Managers can only see better positions within a specific, local neighborhood (the standard small-jumps restriction) [see below for when big jumps are allowed]. Search follows fixed behavioral rules (often conditional) that are not modified in response to feedback (Baumann, 2015).
Notes:
- Makes more sense for genetic improvements than for business ones. Search-cost functions do not appear to affect this.
- There is empirical support for modelling local search as limited to the local neighborhood (e.g., Conlisk, 1996).
- This is a very specific way to model bounded rationality; humans are more intelligent than the limited adaptive automata modelled (Baumann, 2015; Csaszar, 2018). The few exceptions to this restriction include mental-model-based search (Csaszar & Levinthal, 2016; Gavetti & Levinthal, 2000).
- The conditional part is affected by feedback (e.g., by a failure to improve, as in Csaszar & Siggelkow, 2010, or by too slow an improvement, as in Csaszar & Levinthal, 2016), but the rule itself does not change.
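The small-jumps restriction above can be sketched as a one-bit-flip hill climb. All names here are illustrative rather than taken from any specific published implementation; `fitness` stands in for whatever payoff function maps a DNA tuple to a number.

```python
import random

def local_search_step(dna, fitness):
    """One round of standard NK local search: the agent sees only the N
    one-bit-flip neighbours and moves to a better one if any exists.
    There is no path memory and no learning, per the assumption above."""
    neighbours = []
    for i in range(len(dna)):
        flipped = list(dna)
        flipped[i] = 1 - flipped[i]
        neighbours.append(tuple(flipped))
    better = [n for n in neighbours if fitness(n) > fitness(dna)]
    return random.choice(better) if better else dna  # stuck at a local peak
```

Note that the rule never changes in response to feedback; at a local peak the agent simply stays put.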
Assumption: Managers can immediately act to exploit an identified better position.
Notes: It is unrealistic to model, even in a simulation, that firms face no constraints, frictions, or delays in altering a product or process.
Assumption: Managers cannot follow the steepest gradient for improvement; instead, they retain no path memory and simply jump to the next better location (if one exists). There is no learning (Puranam et al., 2015).
Notes:
- Organizations and managers have memories (and path dependencies), and their consistency along a tactical path is usually expected for planning purposes. They learn.
- Alternative performance-feedback responses, which may be more realistic, remain unconsidered.
- There are few exceptions to this restriction; some involve explicit memory modelling (e.g., Jain & Kogut, 2014).
Assumption: When big jumps are allowed, they are modelled as a random draw (from a uniform distribution) or as a costless and perfect imitation of a more successful rival (Csaszar & Siggelkow, 2010).
Notes: Real firms cannot carry out random full transformations, let alone without frictions or extra costs (so why not model some restrictions?).
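Both big-jump variants described above are trivially small in code, which underlines how little structure they impose; a minimal sketch with illustrative names:

```python
import random

def long_jump(n, rng=random):
    """A 'big jump' as standardly modelled: a fresh uniform draw of the
    entire DNA string, with no cost, friction, or partial-change limit."""
    return tuple(rng.randint(0, 1) for _ in range(n))

def imitate(rival_dna):
    """The alternative big jump: a costless, perfect copy of a more
    successful rival's DNA."""
    return tuple(rival_dna)
```

Neither move depends on the firm's current position, which is the frictionless full transformation the notes question.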
Assumption: There are no direct costs to altering the DNA of the organization/entity; all changes cost the same and are completed equally effectively.
Notes: Change is costly in the real world, and depends on the type, timing, and technique of the change. Change is also often considered a source of competitive advantage (e.g., in the DCV), but that is ignored here.
Assumption: The total number of entities on the landscape remains constant. (Few exceptions exist; these include Adner et al., 2014.)
Notes: Yes, this is easier to code, but the context here does not support it. The idea that all deaths are replaced by births (or potential entrants) is not realistic, nor are the implicit restrictions on firm growth in scale, or via franchising or buyouts, in the NK simulation proper.
Assumption: New firms (births) have random DNA or imitate currently successful DNA.
Notes: This is not realistic for business. Such births would not attract investment, because they have no expected advantage over incumbents.
Assumption: It is a simultaneous-move game, where all other information (i.e., about the opponents, the payoffs, and the random-draw functions) is known with certainty. Any one firm can do what all other firms can do if it is in the same location.
Notes:
- This type of information set and this type of homogeneity are unrealistic in business. They can provide a benchmark in isolation, but with all the other assumptions added on, they stretch what relevance the simulated output provides. Furthermore, this assumption leaves little room for managerial discretion or function at all.
- Often, knowledge of payoffs is inaccurate (Puranam et al., 2015).
- Very few exceptions exist [e.g., Rivkin and Siggelkow (2003) include heterogeneity across firms in the number of alternatives seen].
Assumption: Travel across the landscape is based only on either pure path dependency (continuous movement) or luck (discontinuous movement).
Notes:
- Where is the room for managerial strategy, rather than simple heuristics, in this model? It seems inconsistent to assume simple decisions when analyzing outcomes so as to prescribe non-simple decisions.
- Heterogeneity in policies emerges as stable due to path dependencies in rugged landscapes, which may not be realistic for rational, informed decision-makers (Puranam et al., 2015).
- Few exceptions exist [e.g., Csaszar and Levinthal (2016) include a parameter for heterogeneity in attention to landscape attributes that affects travel].
Assumption: The initialization of the landscape and of the initial population is based on random draws from a uniform distribution (i.e., for the DNA elements and for the K-type interactions).
Notes: Why is no symmetry imposed for the K-type interactions among the same elements, and why are no population dynamics taken from related landscapes? Yes, it is easier to code and may maximize initial entropy (Jaynes, 2003), but those justifications are questionable in a business context, where structures do exist. The few exceptions to the base case appear to recognize that fact [e.g., Posen and Martignoni (2017), where the initial population imitates good performers; Albert and Siggelkow (2022) and Rivkin and Siggelkow (2003) control initial populations for specific characteristics].
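The uniform-draw initialization criticized above can be sketched as follows; this is an illustrative implementation of the conventional setup, not any one paper's code.

```python
import itertools
import random

def make_nk_landscape(n, k, seed=None):
    """Standard NK initialization sketch: gene i's payoff contribution is
    a fresh U(0,1) draw for every configuration of itself plus its k
    neighbours (conventionally the k genes to its right, wrapping around).
    Note the features questioned above: uniform draws, no imposed symmetry
    between gene pairs, and interaction restricted to the nearest genes."""
    rng = random.Random(seed)
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]

    def fitness(dna):
        contributions = [
            tables[i][tuple(dna[(i + j) % n] for j in range(k + 1))]
            for i in range(n)
        ]
        return sum(contributions) / n  # mean contribution, the usual measure

    return fitness
```

Each gene carries its own independent lookup table, which is exactly the non-universal, asymmetric functional form discussed further below.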
Assumption: The landscape usually remains fixed throughout the analysis. (There are exceptions for the NKC version of the simulation, and for models that shock the system [by altering the landscape during a run] to check the robustness of specific strategies.)
Notes:
- Static environments are too abstract for modelling some important problems (Baumann, 2015; Ganco & Hoetker, 2009).
- Payoff structures are exogenous (Gavetti et al., 2017).
- The search space is exogenous (Ganco et al., 2020).
- Where is the co-evolution of the environment with the players interacting with it? The C in NKC includes modelling of some effects of cooperation and competition on rival landscapes. In addition, the shocks can capture some meta-phenomenon effects. However, single-landscape co-evolution is missing from the basic NK approach, although such co-evolution may be a more realistic accounting of many management phenomena.
- Some endogeneity of payoffs and of the space is more realistic.
- Few exceptions exist [e.g., Rivkin and Siggelkow (2003) model some turbulence in the landscape; Gavetti et al. (2017) and Li and Csaszar (2019) include some limited ways to shape the landscape].
Assumption: The survival rule is imposed with immediacy, eliminating the current lowest performers (with either certainty or high probability).
Notes:
- Where would Amazon and other long-play-strategy firms [firms that did not report profitability for years] be in this model? Does it seem proper to exclude such major recent success stories with this approach?
- Very few exceptions exist [e.g., Csaszar and Siggelkow (2010) do not eliminate firms].
Assumption: Firms can engage in both local and distant search.
Notes:
- Why is it proper to assume this kind of ambidexterity?
- Regardless of whether search-impact comparisons are made, why not instead assume that there are specialists in each search type, as each is likely to involve different skills?
Assumption: The most common steps repeated in the simulated timeline are: identify deaths, conduct survivor choice searches, replace deaths, allow action, calculate outcomes, and repeat.
Notes:
- The steps lack interdependence (unless run as an NKC simulation, where there is a Stackelberg-like sequencing of move and counter-move). This homogenizes search and action (e.g., in terms of efficiency) across all players, which is not realistic.
- Such steps highlight the ecological roots of a method that assumes such ordered and linear processes (Hannan & Freeman, 1977).
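The repeated timeline can be sketched as a short loop; all names and concrete choices here (e.g., exactly one death per round) are illustrative, not from a specific published model.

```python
def run_timeline(population, fitness, steps, search, spawn):
    """Skeleton of the ordered, linear timeline described above."""
    for _ in range(steps):
        # identify deaths: the current worst performer dies with certainty
        worst_i = min(range(len(population)),
                      key=lambda i: fitness(population[i]))
        survivors = population[:worst_i] + population[worst_i + 1:]
        # survivor choice searches and action, collapsed into one call;
        # every agent uses the same routine -- the criticized homogeneity
        survivors = [search(dna, fitness) for dna in survivors]
        # replace deaths one-for-one, keeping the population size constant
        population = survivors + [spawn()]
        # outcomes are recalculated via fitness at the next round; repeat
    return population
```

The steps execute in a fixed order with no interdependence between agents, which is the ecological, linear process the notes point to.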
Assumption: Models the DNA as on-off switches (0-1) instead of qualitatively different choices for each factor in organizational management (A-B-C…).
Notes:
- N-dimensional binary-vector optimization constitutes a strong abstraction from real-world problems (Wall, 2016).
- It is easy to code, but misses the point that interior optima (and the tradeoffs involved among factor levels) occur in the real world more often than extreme optima (e.g., hitting boundary conditions).
- Lack of external validity (Wall, 2016).
- Very few exceptions to the two-level model exist (e.g., Rahmandad, 2019).
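A minimal relaxation of the two-level restriction, in the spirit of (though not taken from) the exceptions noted above, lets each factor take A qualitative levels; local search then faces (A - 1) * N single-factor moves instead of N bit flips:

```python
def neighbourhood(dna, levels):
    """All single-factor changes when each of the N factors can take
    `levels` qualitative settings (levels == 2 recovers plain bit flips).
    Illustrative sketch only."""
    out = []
    for i, current in enumerate(dna):
        for alternative in range(levels):
            if alternative != current:
                out.append(dna[:i] + (alternative,) + dna[i + 1:])
    return out
```

With more than two levels, middle settings become reachable, so interior optima (rather than only boundary ones) can appear.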
Assumption: Changing one of the N genes does not affect another gene directly; it affects payoffs through the K-function that involves the other genes.
Notes: This does not seem realistic for product or process design alterations (e.g., with power supplies, platform choices, and so on), where the effects are seen in the realization of the product itself rather than in its gross revenues.
Assumption: The K-based function is not in a universal form, but entails a different sub-function for each subset of genes (and is not even symmetrical in those effects between gene pairs).
Notes: This seems like an arbitrary choice of functional form rather than one that has parallels in business or engineering (and it is inexplicably restricted to effects on the closest other genes, without any check on why that closeness appears in the first place).
Assumption: It is possible to have more than one global maximum (e.g., the neutrality modelled in Jain & Kogut, 2014).
Notes: This is often unrealistic in business (e.g., in standards wars).
Assumption: Restricted to one dependent variable (DV), i.e., the landscape-height-as-payoff fitness measure.
Notes:
- Other DVs are important (e.g., speed to payoff), both simulation-based and reality-based (e.g., market share, brand, corporate social responsibility, carbon footprint, and so on).
- Real organizations face a concurrency of multiple and conflicting performance measures (Baumann, 2015; Puranam et al., 2015).
- Very few exceptions model multiple fitnesses (e.g., Adner et al., 2014).
Assumption: Involves the trick of a 3D landscape representation of an (N + 1)-dimensional game, and the power of being able to envision both rough versus smooth terrains and the physical traverse of that landscape to higher ground.
Notes: While the analogy appeals to basic human experiences and visual abilities, simplifying very complex competitive strategic-management decisions in this way is likely to be misleading (and dangerously confidence-building).
Assumption: It is possible, with non-structured models, to rig them to produce the desired results (Wall, 2016).
Notes:
- NK models are often seen as black boxes by most readers, and especially by most practitioners. Model specifications are seen as idiosyncratic to the researcher (Ganco & Hoetker, 2009). Given that the standard, structured NK model has limits and has been used extensively, modified models are becoming more popular. This increases the rigging worries because of the black-box effect (from the unfamiliar modifications).
- Behavioral rules are sometimes then introduced on an ad hoc basis, without empirical validation, and sometimes based only on stylized facts (Wall, 2016).
Assumption: The analysis of the model’s substantial data output is done through extensive numerical derivation (e.g., filtering, smoothing, regression, and so on) to identify reported patterns.
Notes:
- Low traceability of variance, such that the reported data do not represent all outcomes (Wall, 2016).
- It is difficult to isolate complementarities (Ganco et al., 2020) that are of managerial interest, and which may be discoverable in reality.
Assumption: Firms (as simulated agents) are not competing with each other for resources (Ganco et al., 2020).
Notes: In reality, firms do compete for resources, both horizontally and vertically.