Open Access
Balancing the perceptions of NK modelling with critical insights
Journal of Innovation and Entrepreneurship volume 11, Article number: 23 (2022)
NK models are agent-based simulations of market evolution, generated through new entry and firm innovation, that are often used to better understand complex interdependencies in organizational phenomena. We provide a counterpoint to the mostly optimistic descriptions of the advantages and requirements of these fitness-landscape-based analyses. We do so by offering a comprehensive list of the limitations of that modelling technique, as well as a critical analysis of two recent applications of the NK model—one theory-testing application and one theory-building application. Our analysis reveals that when care is not taken to capture the essential parts of a phenomenon, the NK approach may be unnecessary at best and misleading at worst. We discuss the implications of these analyses and update past suggestions for future uses of NK simulations in organizational research.
We offer this commentary on NK landscape-based modelling to balance the overly rosy picture of this unique type of research method (Wall, 2016) that currently exists in the literature (e.g., Csaszar, 2018). We offer the first explicit list of the limitations and dangers of NK landscape models. We also illustrate in detail what can go wrong when such modelling is misapplied, by analyzing two examples of research that have appeared in high-quality outlets. It is important to consider the downsides of this methodology because, even though over 70 published studies have used it since 1997 (Baumann et al., 2019), and such research continues to appear in top journals, there remains considerable misunderstanding about how it works. The main reason for such misunderstanding is that the methodology is complex and unfamiliar, so personal experience using it is indispensable for properly reviewing any contribution that applies it. Unfortunately, such experience is not generally provided in doctoral programs (Baumann, 2015), so having such work properly reviewed and understood can be challenging. This commentary exists to clear up those misunderstandings. Specifically, we wish to balance the perspective that the NK-based methodology has only led to positive and significant contributions to the relevant literatures in innovation, entrepreneurship and organizational design. We do so by offering a less positive counter-perspective, one that concludes that its role should be relegated to that of a complement to, rather than a substitute for, more traditional research methods (Wall, 2016).
As Fioretti (2013) explained several years ago, the NK-based methodology offers an agent-based simulation used for (pseudo-) quantitative research in organizational studies. It is known for providing tunable rugged landscapes that coded agents traverse in search of local and global fitness optima in an evolving manner. Originally conceived for theoretical biology (e.g., Kauffman & Weinberger, 1989), it has existed in its current form for over 30 years. As a research tool, it has several advantages in specific applications, because it provides precise control over factors, a stable evolution of the agent population towards greater fitness, and the ability to collect a vast amount of data based on thousands of runs involving hundreds of interdependent agents, all at a very low relative cost. It has been established as a legitimate methodology by famous scholars at famous institutions, starting with its application in the natural sciences, where the theoretical laws of primary interactions are stationary (unlike in business). However, as with all methodological tools, the NK model can be misused or stretched in use. Such stretches are especially problematic when the tool crosses into new fields, such as management, innovation and entrepreneurship—fields that all involve social phenomena entailing interactions quite different from those found in biology. Unlike for many other tools in business, however, the reviewer pool for NK simulations is relatively shallow in our fields, given that it takes substantial extra training to understand and deconstruct these models and their mechanics. Because of that shallow reviewer pool, misuse and stretching of this tool may be tolerated more readily than for other methodologies. Regardless, NK modelling remains a powerful and growing niche of research in business.
Why is that important? It seems self-evident, but worth stating, that whether a research method is applied properly can significantly affect a study’s results and prescriptions. For example, if the NK model is incorrectly applied, then any observed outcomes, along with the pseudo-empirical supports and the policies that follow from them, are likely to be erroneous as well. Such new methods should not be thought of as involving nothing but upside; they should also be considered potentially dangerous solutions that, if applied incorrectly, can mask the damage they do. Our contribution is to offer a counterpoint to the optimistic descriptions of the NK model that currently exist, so that researchers and readers can treat the results of NK model-based analyses with greater, and justified, skepticism.
Methods and background
The method we use to provide our counterpoint is critical and constructive analysis of the NK model approach. That analysis is first aimed at the standard NK model and leads to a comprehensive list of its limitations. The analysis is then aimed at two illustrative recent examples involving issues with NK model applications and the efficacy of the results. To move forward, we also add some advice for improving upon future NK model-based research.
The NK model and its two most powerful uses
We recap the basics of the NK model methodology prior to describing its powerful applications. In an NK simulation model, each automaton (aka agent, player or firm) is represented by a string of N genes. For simplicity, the usual coding is binary for each gene (it takes a 0 or 1 value). For each string, a fitness score (a scalar) can be computed from a function (usually additive) in which the value of each gene combines with the values of its K neighboring genes. A landscape can then be calculated from all possible strings and their corresponding fitness values; the smoothest landscape occurs when K = 0 and the most rugged when K = N-1. One can then visualize an agent with a specific string innovating so as to alter that string and move higher in its fitness landscape (where higher means better—e.g., being more profitable). The evolution of the population of these agents occurs by coding how they can alter their genes (with varying constraints and/or costs) as each agent explores (and/or imitates) a restricted set of local (or rival or random) gene variants. These agents do so simultaneously and independently. In addition, it is standard to replace the agents having the lowest fitness with new agents, each of which is provided a new string of randomly assigned gene values. Over time, given the coded-in imperative to increase fitness, the population moves towards stability at local or global optima (i.e., the higher points in the rugged landscape).
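For readers unfamiliar with these mechanics, a minimal sketch in Python may help. It is illustrative only: the function names (`make_nk_landscape`, `local_search`) and parameter choices are our own, the per-gene fitness contributions are drawn at random, and the population-level replacement of low-fitness agents described above is omitted for brevity.

```python
import itertools
import random

def make_nk_landscape(N, K, rng):
    # One random lookup table per gene: the contribution of gene i
    # depends on gene i plus its K right-hand neighbours (wrapping).
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
        for _ in range(N)
    ]

    def fitness(genome):
        # Additive fitness, averaged over the N gene contributions.
        return sum(
            tables[i][tuple(genome[(i + j) % N] for j in range(K + 1))]
            for i in range(N)
        ) / N

    return fitness

def local_search(genome, fitness, rng, steps=100):
    # Hill-climb: flip one randomly chosen gene at a time, keeping
    # the flip only if fitness does not fall (i.e., local search).
    genome = list(genome)
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(len(genome))
        genome[i] ^= 1
        f = fitness(genome)
        if f >= best:
            best = f
        else:
            genome[i] ^= 1  # revert the unhelpful flip
    return genome, best

rng = random.Random(42)
fitness = make_nk_landscape(N=6, K=2, rng=rng)
start = [rng.randint(0, 1) for _ in range(6)]
genome, best = local_search(start, fitness, rng, steps=200)
```

Because the search only accepts non-decreasing flips, the agent can become trapped at a local optimum; raising K makes such traps more numerous, which is the sense in which the landscape becomes "rugged".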
It is important to note that the coupling of attributes in the landscape and across each agent’s gene string (denoted by K) does not capture the complex interdependencies inside the firm with which a manager or entrepreneur must deal. Instead, the standard NK landscape used in these studies refers to a technical set of binary choices over a firm’s nature (i.e., the steps in a process it uses, or the attributes of a product it sells) with a static, universal-to-all-firms optimal innovative design (of the process or product) that can be exogenously pre-determined by the coder (but not by the simulated firms). It is also important to note that the NK simulation does not actually provide an intuitive 3D (x–y–z) landscape to traverse virtually, although it is almost always depicted as such. Instead, each location [for N > 3] is actually a corner on a difficult-to-visualize N-dimensional hyper-cube that projects payoffs outwards. With those basics of the NK model methodology covered, we can now consider its relevant applications.
The two most powerful uses of the NK model in entrepreneurial, innovation and management research involve activities supporting theorizing (Baumann, 2015; Wall, 2016). In the first use, the NK model provides a means to test or extend an existing theory. In the second use, it provides a means to induce a new partial theory. Each use embodies a mediating role between organizational reality—real-world observations or practices—and organizational theory—abstractions of the key relationships among factors that are believed to drive the outcomes observed. In each use, the NK simulation generates a lot of life-like data in a controlled and particular context. When the agents are coded to act as if they are following the principles of an existing theory, doing so in a specific and difficult-to-replicate-in-reality context, then an empirical analysis of the simulated outcomes can provide some support for that theory’s predictions. Alternatively, when an existing NK simulation is tweaked and added to in new ways, and some of these experiments provide simulated outcomes that mimic real-life-observed patterns, then the driving forces represented by those tweaks can be assessed to see if they provide a coherent alternative explanation (i.e., a proposed new induced partial theory) for the seemingly mimicked phenomenon. [Footnote 1]
The reason why such mediating roles are sought in social science fields such as innovation management and entrepreneurship is that methodologies such as NK modelling can provide what-looks-like-data when real-world data is much more difficult or impossible to obtain. In the social sciences, we cannot always feasibly experiment on our phenomena-of-interest directly (e.g., it is impossible to run a controlled, repeated experiment on the economy to see how it reacts to a number of alternative entrepreneurship-promoting policies). Therefore, we turn to indirect ways—and that means using models. Sometimes a formal model can provide insights into outcomes through logical arguments. Sometimes a closed-form mathematical model can provide optimizations and explanations for more complex systems of relationships and their outcomes. However, when the phenomenon cannot be properly modelled in a box-and-arrow form, or with solvable mathematics, then more sophisticated methods such as simulations can be effective. Such computational simulations generate sort-of-real data, especially when the simulation is calibrated to a set of real observations for specific conditions and when the coding is based on accepted theoretical premises. In the best cases, a simulation acts as a scaled-down and simplified model, just as a wind-tunnel provides a small and controllable model of real fluid dynamics, although at a different Reynolds number and enclosed by tighter boundary conditions. Here, the NK simulations are the wind-tunnels that can provide insights on the performance of existing organizational, product and process designs under new conditions (i.e., to test existing theories under various new constraints) as well as on possible new drivers of established, recognized outcomes (i.e., to induce new theory based on possible new inputs into processes that are controllable and visible in a simulated world).
Research scholars in entrepreneurship, management, process innovation and strategy have indeed attempted to use the NK model in such ways. It has been used to translate between reality and theory (i.e., to identify potentially new simulated drivers of outcomes that produce near-real-world observations) and between theory and reality (i.e., to test whether theoretical predictions generate their expected outcomes in a simulated realistic economy) when real phenomena were hard to control, idiosyncratic, adaptive, subject to deception, and so on.
The option of using the NK model in such mediating roles is attractive for several reasons. First, it is inexpensive. Second, it generates a large volume of data for high statistical power, including data that can easily be used to visualize the evolution of fitness and to identify the eventual equilibria under different assumptions (e.g., assumptions about the firm’s internal coupling of processes, and about the firm’s types of interactions with its environment). Third, it looks like hard science: it involves coding (i.e., with explicit statements of the model assumptions), mathematics, and many principles established in the natural sciences (e.g., evolution); in addition, it carries legitimacy from being a method established in the hard sciences.
We now describe the outcomes of our analysis. Unfortunately, it appears that the application of NK models to business research has, at times, failed. However, this has not been sufficiently acknowledged and analyzed. To address that deficit, we explain several reasons for such failures, first by critiquing the NK model generally, and second by critiquing two specific but representative example applications.
Limitations of the standard NK model in research
In Table 1, we list the main reasons why the depiction of business reality through an NK model can be poor. It should not be surprising that it can be poor, given that the NK simulation methodology was not created to model managerial and entrepreneurial phenomena. The NK model was not written to capture human-designed organizational behavior or structure, nor to depict how such organizations innovate and compete. In addition, it was not written to track multiple, interdependent performance measures, but rather to assess only one landscape-terrain-shape-defining payoff output (i.e., the fitness scalar). Instead, the NK model was written to describe groups of similar individual entities—each with the same potential capabilities and constrained by a genetic string of the same length—competing independently on an inert and stationary landscape, where survival can be relatively (and often absolutely) dependent on just one instantaneous performance measure.
In Table 1, we provide two dozen concerns about the basic NK modelling method. While we acknowledge (in the table) that there exist individual exceptions to many of the specific critiques, the existence of such exceptions actually reinforces the point we are making in this commentary. If the modifications we suggest are important enough to be published (in the pieces that we consider exceptions), then the critiques of the unmodified base model must hold significant validity. In addition, if those modifications continue to be exceptions—if they are not systematically adopted in each future version of the updated-new-base-model—then that provides an additional challenge to the efficacy of a (insufficiently changing) methodology that continues to be applied to business phenomena.
Every formal modelling method involves the sacrifice of realism (McGrath, 1981), and the NK model methodology is no exception. However, our concern is more nuanced, and is offered to counter-balance the often unquestioned portrayal of NK models as providing only legitimate data-as-evidence in their applications. We raise the possibility that such data can be less than reliable when, for example, poor coding fails to capture important aspects of the phenomenon being studied. The sacrifice of realism is only warranted when the model’s product is useful, either in the abstract or in the real world, and that is not always the case with this often less-understood methodology. As such, we see it not as an independent solution for better understanding complex phenomena, but rather as one of the complementary research methods that allow researchers to engage in a process of strong inference when confronting such phenomena with a mix of methods (Platt, 1964).
As Table 1 details, there are many concerns with a sole reliance on the NK model methodology. For example, there exist fundamental incompatibilities with its application to organizational phenomena (e.g., to the characteristics of the actors, optimizations and interactions involved) that severely limit how effectively it, alone, can capture any real focal research issue. In addition, even if that methodology could capture a real phenomenon in any given run of a simulation, verifying that that specific model run’s variable values were applicable to a given manager’s specific decision problem is generally impossible. [Footnote 2] Instead, the power of the NK simulation most often emerges from the visualizations of the main outcome patterns that are revealed over thousands of specific runs as one focal factor value at a time is varied to see its average effect. The problem with such powerful visualizations is that any one individual organization is not represented by a de-noised version of an average firm in a context, where all other factors are held constant. Rather, in reality, each firm is unique in space and time, and that uniqueness often drives its relative performance as well as the identity of what action is best for a manager to choose in that particular dynamic situation. [Footnote 3] The average pattern does not correspond to any one firm’s choices and performance, so extrapolating from the average—in the midst of likely contingencies—provides little to no guarantee of improved performance.
Regardless, the NK approach certainly looks like an attractive tool with a history of well-respected users. It is a methodology that adds to the diversity of our approaches for addressing complex problems, and that is valuable. It provides a cheaper way to generate data than surveying real firms struggling with real decisions. In addition, it can shorten the descriptions of real and complex problems by referencing foundational and related NK modelling in business (e.g., Levinthal, 1997). [Footnote 4] Despite its many legitimate advantages, the application of this methodology is often flawed, as we point out in the two examples below.
Critiques of illustrative example NK model applications in business
Table 2 provides the main critical issues and faults relevant to two specific NK modelling method applications capturing its two most powerful uses. One example application relates to theory-testing; the other relates to theory induction. In the first example—a recent piece in entrepreneurship—Welter and Kim (2018) use the NK model to test a focal theory (i.e., of effectuation—see Sarasvathy, 2001). They employ the NK model to determine whether that theory’s prescriptions perform better than alternatives across specific conditions. They code simulated firms to traverse an NK landscape using decision-making logic based on the focal theory’s prescriptions, and compare the outcomes to when a different set of prescriptions, based on an alternative decision-making logic (i.e., of causation), are coded. They also code landscape shifts—i.e., sudden alterations in how fitness is computed from the N genes—to test the performance differences of those prescription sets across those various contextual conditions.
Using an NK model to test a theory requires, at a minimum, that its coding faithfully captures the original theorizing (Fioretti, 2013). [Footnote 5] Here, it also requires that the coding accurately depicts the various competitive landscape shifts. Unfortunately, in the paper, neither is captured accurately: the focal theory’s relevant components are not all coded (i.e., many defining characteristics of the focal decision-making logic of effectuation are missing, such as the concept of affordable loss), and what characterizes the various landscapes is not what is stated (e.g., risk is not captured by a static landscape but rather by a set of possible known outcome states and their known probabilities of occurrence). As such, the theory is not actually tested, and any apparent support for it is spurious. That said, we understand how this could have occurred. It is very challenging to code a multi-faceted theory or an informationally complex context (e.g., one involving a particular type of uncertainty) when the basic toolkit of this method was never intended to capture either. Such limitations perhaps should have been more thoroughly discussed. Furthermore, what was not accurately captured in the model needed to be made more explicit, and any support from the model made more conditional. [Footnote 6]
In the second example—a less recent paper on process innovation that has received mostly positive reaction in the NK model review pieces—Lenox et al. (2007) use the NK model to generate data patterns that mimic real observations of industry evolution. Those patterns are based on a then-proposed-as-new-to-the-literature set of driving forces, some of which are coded in an NK model. The relationships emerging among the forces are leveraged to build a new partial theory. Their NK simulation produces a coded output indicating a firm’s per-period production cost level, which is then fed into a one-period Cournot-competitive industry model, to provide a measure of fitness for the NK simulation. The two-part process is then repeated so that the population evolves. This generates patterns over time of each firm’s costs, of the market’s prices, and of the number of participating firms. That complete data-generation exercise is then repeated for cost functions entailing differing levels of production factor-interdependency captured by the landscape-defining dimension K. The analysis of the array of patterns is described and arguments are made that the industry evolutions depicted emerged from a new set of drivers.
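To make the two-part process concrete, the Cournot stage just described has a closed form under linear demand. The sketch below is schematic and is not Lenox et al.'s actual specification: the demand parameters `a` and `b`, the function name, and the crude clamping of negative quantities are our own illustrative assumptions.

```python
def cournot_profits(costs, a=100.0, b=1.0):
    # Interior Cournot-Nash equilibrium for n firms with constant
    # marginal costs `costs`, under linear inverse demand P = a - b*Q.
    # Summing the first-order conditions gives P* = (a + sum(costs)) / (n + 1),
    # and each firm's first-order condition gives q_i = (P* - c_i) / b.
    n = len(costs)
    price = (a + sum(costs)) / (n + 1)
    # Crude clamp: the closed form assumes every firm produces a positive
    # quantity; firm exit is not modelled in this sketch.
    quantities = [max(0.0, (price - c) / b) for c in costs]
    profits = [(price - c) * q for c, q in zip(costs, quantities)]
    return price, profits

# Per-period costs (as would be produced by the NK stage) feed in here;
# the resulting profit would then serve as each firm's fitness score.
price, profits = cournot_profits([10.0, 20.0, 30.0])
```

In a loop of this kind, the NK stage updates each firm's cost level, the Cournot stage converts costs into profits, and the profits then drive selection in the next NK generation.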
Unfortunately, there are several issues with their application of the NK-based approach. First, the most appropriate form of the NK model (i.e., the NKC form—a form that directly accounts for rival firms affecting the focal firm’s landscape) was not used, and the choice not to use an NKC model was never actually justified (Ganco et al., 2020). Second, running the simulation was not even required, given that the standard NK simulation evolution output pattern was known, as were the reaction functions in Cournot models to changes in variable costs and in the number of competing firms (especially because the kind of cost shift assumed possible was unrealistically restricted—Ganco & Hoetker, 2009). By simply stating all of the assumptions of the NK and Cournot models and how they were linked, it would have been straightforward to logically and deductively explain and predict the outcome patterns produced in the simulation. [Footnote 7] Third, there was a failure to review the literature available at the time, which had already established the NK model as an alternative explanation for observed industry evolution patterns (e.g., Huygens et al., 2001; Martin & Mitchell, 1998). Inducing a proposed new theory from an NK simulation’s data appeared forced in this case (because the best model was not applied, and the reasons why the modelling assumptions did not obviously and directly drive the outcomes were not provided). Furthermore, the interpretation of the simulation’s data from the visual patterns produced scrubbed away the majority of the variance in firm-level outcomes, such that the real-world factor interdependencies underlying co-evolution among suppliers, and between supply and demand, were largely lost. The point arising from this critique is that the NK model methodology can be applied unnecessarily or improperly, and that this can lead to questionable results or under-justified partial theorizing. [Footnote 8]
Discussion and suggestions for improvement
This commentary complements and counter-balances the works by Csaszar (2018), Fioretti (2013) and others that have optimistically described how NK models can aid management, entrepreneurship and innovation research. It does so by describing the limitations of those models and the downsides of their misapplications. At a high level, our general and specific critiques of the use of the NK model as a standard mediation-type approach in theorizing point to the possibility that it may not be the answer for theory-testing or theory-building. In fact, the examples point to the likelihood that using the NK model alone may be inappropriate and produce poor results. On reflection, it seems overly optimistic to have expected that a model based on the mechanics of biological evolution would accurately capture the strategic challenges that entrepreneurial managers face in their real and idiosyncratic organizations.
Fioretti (2013, p. 233) implies three basic guidelines for when the NK model is more appropriate: (i) when the structure of interactions between social actors matters; (ii) when overall organizational behaviors that arise bottom-up out of interacting actors matter; and (iii) when out-of-equilibrium dynamics matter. We are more restrictive in our updated guidelines for when the NK model is more appropriate: (i) when the focus is on the difference between local and global searches; (ii) when the focus is on population-level (and not individual-level) outcomes that evolve only from indirect interactions among routinized firms/actors; and (iii) when the focus is on the differences in patterns of a forced evolution towards greater fitness induced by the variation of specified factors.
Fioretti (2013, pp. 235–236) also suggests further guidelines for NK model application regarding validation, specifically: (i) for theory-testing—that the model be faithful to the original theory; and, (ii) for the imitation of observations—that the model be authenticated at both the individual and aggregate levels of behavior. In our critical analyses of the two examples, those guidelines were not adhered to. In the first example, the original theory was not captured correctly. As such, that paper’s suggestion that it was properly tested was inappropriate, as was the suggestion that it was tested under specific conditions (given those conditions were not accurately captured either). In the second example, observational authentication was a concern, because the paper’s overall simulation forced together two incomplete models at different levels of behavior. Specifically, the NK model does not capture firm-level production scale, and the one-shot Cournot game model does not capture aggregate-level dynamics. As such, the second guideline for validation appears missed for the NK part of the simulation. In terms of agent-based models, it would have been more legitimate to start with the application of the most suitable model (i.e., an NKC variant), or at least one where the agents evolve and compete together on the same landscape. Having evolution occur on one landscape that involved low-rationality decision-makers while competing on a different one that involved high-rationality decision-makers was problematic for validation in that second example.
Those guidelines aside, we do understand that modelling involves a paradoxical challenge of balance—capturing reality while abstracting away from it. At their best, models focus on a few key factors for a specific research question to provide new insights and generalization. At their worst, models misinform and lead to worse decisions. We believe, however, that for new modelling methods, such as NK simulation, there is a greater onus on researchers to prove that such balancing is being pursued properly. Without that onus, there are two big dangers. The first is over-extension of the method, perhaps akin to someone with a hammer seeing too many things as nails; the application of this new method calls for restraint and careful choices. The second is that alternative models—those written specifically for managerial, entrepreneurial or innovation problems—will be crowded out by applications of this method that are under-modified for contexts that are not biological (Baumann, 2015). That said, we do recognize that some recent NK modelling has improved to address some of its past limitations. For example, the Gavetti et al. (2017) model allows landscape shaping by firms; but such work appears to be more the exception than the rule so far. Thus, at this stage, we believe that this method remains a better complement to traditional methods than a substitute.
We have explored the limitations of the NK model methodology and critiqued recent example applications to justify our conclusion: as a stand-alone method, the NK model lacks evidentiary substance, but it remains an effective supplement to the more traditional methods of empirical analysis and mathematical–logical analysis. We hope that our analysis of the generic NK model has provided a balance to the mostly positive and uncritical descriptions of what that method involves, so that audiences who are not experienced with coding it can better understand its limitations and the premises upon which it is based. We also hope that our commentary will help those advocating the method to update and improve its minimal model specifications in the future. We hope that the detailed examination of two example applications highlights important concerns, and that this leads to more careful use of the method and more scrupulous pre-publication reviewing of such research. The conclusion that this new—arguably third—main evidentiary method of research has severe potential downsides (that have not previously been fully listed and exemplified) is worth repeating because of its relevance to our business fields. The phenomena we often study are not always easy to gather data on and, so, the attraction of using the NK model methodology to provide pseudo-empirical results may be high. This commentary provides a way to assess that option, a caution about the issues that may arise, and some advice about the modifications to the base model that should be considered (i.e., where the onus is on the researchers to prove the method is both necessary and suitable to their specific application). We hope that this commentary leads to a clearer appreciation and use of all newer pseudo-data-generating methods in the future, and to better understandings of our entrepreneurial and innovative phenomena of interest.
Availability of data and materials
As alluded to, we consider the NK model as a useful tool, but more as a complementary rather than a validly independent methodology. In that complementary role, it has value in testing boundaries, discovering discontinuities (and other unusual nonlinearities) in existing theories, and in testing the robustness of proposed possible real-world measures, all when data is not easily available (e.g., Ganco, 2017; Wall, 2016).
Note that it is unusual for any NK model’s parameter values to be synchronized to those in the real world; it is unusual because such parameters – e.g., organizational genes – almost never exist.
One irony about NK modelling is that the core local experimentation process that is hard-coded into the decision-making of the automaton firms actually describes a better approach for real managers to take in most exploratory contexts than any approach based on the analysis of the model’s output patterns emerging from those simulated experiments.
That said, sometimes this shortening is inappropriate because it can cut down on much needed debate over the necessary modifications to any referenced foundational model that is the basis of an application of the method to a new organizational phenomenon. Also, note that every new method eventually gets stretched outside of its applicable bounds. So, it should not be surprising that that has happened with the NK model methodology. However, in its case, we suggest that those bounds are quite tight, and mostly limited to making specific points about evolution-based patterns arising from variations in organizationally-bound structural categories (e.g., internal coupling) or in contextual characteristics (e.g., local versus global search), as Nelson and Winter (1982) and others have shown in their versions of evolution-driven simulations.
It would have been prudent to first test whether the focal theory’s own full logic was internally consistent when fully coded prior to testing it against other logics. Unfortunately, that was not done.
That raises the potential for a new form of flawed study, one that cannot be traditionally challenged by a replication because the theory claimed to be tested is not actually tested. Editors struggle to handle such cases when journals do not provide dialogue outlets that help to identify and discuss such issues. When papers with these kinds of flaws are not flagged and discussed post-publication, such studies are effectively condoned, cited and even repeated, potentially harming the research integrity of our fields.
If the coded assumptions of a simulation predetermine its outcome, then there is no need to run it (but when that is not so, as with chaotic systems, the analysis should precede the theorizing; Gleick, 2011).
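The chaotic-systems caveat can be illustrated with the logistic map, a standard example from the chaos literature (the illustration and its parameter choices are ours, not part of the cited work): two coded starting "assumptions" differing by one part in a trillion soon produce divergent trajectories, so the outcome cannot be read off the assumptions without actually running the iteration.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, x -> r * x * (1 - x); r = 4.0 is in the chaotic regime."""
    return r * x * (1 - x)

# two nearly identical starting assumptions
a, b = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# the tiny initial difference is amplified roughly geometrically,
# so within a few dozen steps the two trajectories visibly diverge
```

A deterministic rule plus fully specified inputs still fails to make the long-run output obvious in advance, which is precisely the condition under which running the simulation adds information beyond its assumptions.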
Lenox et al.'s (2007) conclusions were revisited by Lenox et al. (2010) and then by Lee and Alnahedh (2016). Lenox et al.'s (2010) support for their earlier paper's results rested more on its empirical verification of the well-known U-shaped pattern involved than on their own NK modelling itself. This is despite the fact that Lee and Alnahedh (2016: p. 286) state that Lenox et al. (2010) did not sufficiently support even the U-shape conclusion. That confusion aside, this set of papers also reinforces the concern that the NK methodology's circle of scholars is much too tight: Lenox was the senior editor at the journal who guided and accepted the Lee and Alnahedh (2016) paper, a fact that raises a conflict-of-interest issue and needlessly puts their support of the Lenox et al. (2007) paper in question.
Adner, R., Csaszar, F. A., & Zemsky, P. B. (2014). Positioning on a multi-attribute landscape. Management Science, 60(11), 2794–2815.
Albert, D., & Siggelkow, N. (2022). Architectural search and innovation. Organization Science, 33(1), 275–292.
Baumann, O. (2015). Models of complex adaptive systems in strategy and organization research. Mind & Society, 14(2), 169–183.
Baumann, O., Schmidt, J., & Stieglitz, N. (2019). Effective search in rugged performance landscapes: A review and outlook. Journal of Management, 45(1), 285–318.
Conlisk, J. (1996). Why bounded rationality? Journal of Economic Literature, 34(2), 669–700.
Csaszar, F. A. (2018). A note on how NK landscapes work. Journal of Organization Design, 7(1), 1–6.
Csaszar, F. A., & Levinthal, D. A. (2016). Mental representation and the discovery of new strategies. Strategic Management Journal, 37(10), 2031–2049.
Csaszar, F. A., & Siggelkow, N. (2010). How much to copy? Determinants of effective imitation breadth. Organization Science, 21(3), 661–676.
Fioretti, G. (2013). Agent-based simulation models in organization science. Organizational Research Methods, 16(2), 227–242.
Ganco, M., & Hoetker, G. (2009). NK modelling methodology in the strategy literature: Bounded search on a rugged landscape. In: Research methodology in strategy and management. Emerald Group Publishing Limited. https://doi.org/10.1108/S1479-8387(2009)0000005010
Ganco, M. (2017). NK model as a representation of innovative search. Research Policy, 46(10), 1783–1800.
Ganco, M., Kapoor, R., & Lee, G. K. (2020). From rugged landscapes to rugged ecosystems: Structure of interdependencies and firms’ innovative search. Academy of Management Review, 45(3), 646–674.
Gavetti, G., Helfat, C. E., & Marengo, L. (2017). Searching, shaping, and the quest for superior performance. Strategy Science, 2(3), 194–209.
Gavetti, G., & Levinthal, D. A. (2000). Looking forward and looking backward: Cognitive and experiential search. Administrative Science Quarterly, 45(1), 113–137.
Gleick, J. (2011). Chaos: Making a New Science (Enhanced Edition). Open Road Media.
Hannan, M. T., & Freeman, J. (1977). The population ecology of organizations. American Journal of Sociology, 82(5), 929–964.
Huygens, M., Van Den Bosch, F. A., Volberda, H. W., & Baden-Fuller, C. (2001). Co-evolution of firm capabilities and industry competition: Investigating the music industry, 1877–1997. Organization Studies, 22(6), 971–1011.
Jain, A., & Kogut, B. (2014). Memory and organizational evolvability in a neutral landscape. Organization Science, 25(2), 479–493.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.
Kauffman, S., & Weinberger, E. (1989). The NK model of rugged fitness landscapes and its application to the maturation of the immune response. Journal of Theoretical Biology, 141(2), 211–245.
Lee, G. K., & Alnahedh, M. A. (2016). Industries’ potential for interdependency and profitability: A panel of 135 industries, 1988–1996. Strategy Science, 1(4), 285–308.
Lenox, M. J., Rockart, S. F., & Lewin, A. Y. (2007). Interdependency, competition, and industry dynamics. Management Science, 53(4), 599–615.
Lenox, M. J., Rockart, S. F., & Lewin, A. Y. (2010). Does interdependency affect firm and industry profitability? An Empirical Test. Strategic Management Journal, 31(2), 121–139.
Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), 934–950.
Li, C., & Csaszar, F. A. (2019). Government as landscape designer: A behavioral view of industrial policy. Strategy Science, 4(3), 175–192.
Martin, X., & Mitchell, W. (1998). The influence of local search and performance heuristics on new design introduction in a new product market. Research Policy, 26(7–8), 753–771.
McGrath, J. E. (1981). Dilemmatics: The study of research choice and dilemmas. American Behavioral Scientist, 25(2), 179–210.
Nelson, R., & Winter, S. (1982). An evolutionary theory of economic change. Belknap Press.
Platt, J. R. (1964). Strong inference. Science, 146(3642), 347–353.
Posen, H. E., & Martignoni, D. (2018). Revisiting the imitation assumption: Why imitation may increase, rather than decrease, performance heterogeneity. Strategic Management Journal, 39(5), 1350–1369.
Puranam, P., Stieglitz, N., Osman, M., & Pillutla, M. M. (2015). Modelling bounded rationality in organizations: Progress and prospects. Academy of Management Annals, 9(1), 337–392.
Rahmandad, H. (2019). Interdependence, complementarity, and ruggedness of performance landscapes. Strategy Science, 4(3), 234–249.
Rivkin, J. W., & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among elements of organizational design. Management Science, 49(3), 290–311.
Sarasvathy, S. D. (2001). Causation and effectuation: Toward a theoretical shift from economic inevitability to entrepreneurial contingency. Academy of Management Review, 26(2), 243–263.
Wall, F. (2016). Agent-based modelling in managerial science: An illustrative survey and study. Review of Managerial Science, 10(1), 135–193.
Welter, C., & Kim, S. (2018). Effectuation under risk and uncertainty: A simulation model. Journal of Business Venturing, 33(1), 100–116.
The authors declare that they have no competing interests.
Cite this article
Arend, R.J. Balancing the perceptions of NK modelling with critical insights. J Innov Entrep 11, 23 (2022). https://doi.org/10.1186/s13731-022-00212-9
- NK models
- Organizational science
- Theory building
- Theory testing