Open Access

Laboratory experiments in innovation research: a methodological overview and a review of the current literature

Journal of Innovation and Entrepreneurship: A Systems View Across Time and Space (2016) 5:24

https://doi.org/10.1186/s13731-016-0053-9

Received: 28 January 2016

Accepted: 31 May 2016

Published: 13 June 2016

Abstract

Innovation research has developed a broad set of methodological approaches in recent decades. In this paper, we propose laboratory experiments as a fruitful methodological addition to the existing methods in innovation research. Therefore, we provide an overview of the existing methods, discuss the advantages and limitations of laboratory experiments, and review experimental studies dealing with different fields of innovation policy, namely intellectual property rights, financial instruments, payment schemes, and R&D competition. These studies show that laboratory experiments can fruitfully complement the established methods in innovation research and provide novel empirical evidence by creating and analyzing counterfactual situations.

Keywords

Innovation research; Laboratory experiments; Methodology

JEL-classification

C90; L50; O38

Introduction

Fostering research and innovativeness to support economic growth and increase competitiveness has become a central paradigm for policy makers worldwide in recent decades. The European Commission has recently reaffirmed this goal by committing to spend up to 3 % of the European Union’s GDP to support private innovation activity until 2020. By means of this and other policy instruments, the EU thus aims to become an “innovation union” (COM(2014) 339). This paradigmatic focus has been adopted by the scientific community, which similarly discusses the topics of innovation and industrial policy broadly, trying to obtain insights and provide advice to policy makers concerning the design of policy instruments that optimally foster innovation activity (Mazzucato et al. 2015).

Economic innovation research traditionally argues for government intervention in the case of market failure, which is characterized by the imperfect allocation of resources, for example, due to imperfect competition, information failures, negative externalities, public goods, and coordination failures (Bator, 1958). Given the political commitment to foster innovation activity, government interventions can provide remedies to market failures. For this purpose, several distinct methods of supporting private economic subjects in their innovation activities have been developed. Firstly, regulatory instruments such as rules, norms, and standards have been introduced, such as patents and copyright law. These regulations are compulsory for all economic actors and thus shape the overall market conditions for innovative products and processes. Secondly, financial instruments have been introduced to promote innovative activity, with examples including subsidies, cash grants, and reduced interest-loans, as well as disincentives like tariffs, taxes, and charges. Thirdly, there are “soft” instruments that include normative incentives such as moral appeals to economic actors and voluntary commitments like technical standards or public-private partnerships (Borrás and Edquist 2013; Vedung 1998).

To analyze and evaluate the effects and optimal design of these instruments, economic innovation research has established a large number of empirical research methods. Along with the overall expansion and professionalization of experimental economics, behavioral evidence collected in laboratory experiments has become a vital complement to economic innovation research in recent years. Following Sørensen et al. (2010) and Chetty (2015), we suggest that lab experiments constitute a promising addition to the methodological toolkit in innovation research, advancing novel insights and providing predictions and policy implications by incorporating behavioral factors. We thus argue that laboratory experiments should be used if they yield additional evidence unattainable by other methods in a particular field of study. This resonates with the arguments by Falk and Heckman (2009), Chetty (2015), Madrian (2014), and Weimann (2015), who propose a pragmatic approach concerning the use of evidence derived from experimental methods, arguing that all empirical methods should be viewed as complementary (Falk and Heckman 2009). In this paper, we aim to contribute to the growing field of experimental innovation research, firstly by outlining the advantages and limitations of different methodological approaches in innovation research and, more specifically, of laboratory experiments. Secondly, since previous papers have not attempted to summarize and structure the existing experimental literature, we provide a review of the existing experimental approaches to the field of innovation policy, covering the most important studies from four sub-fields in which lab experiments have been conducted to date. We conclude by encouraging the further use of laboratory experiments in innovation research.

This paper is structured as follows: chapter two outlines the range of methods in economic innovation research, and chapter three discusses the scope of the experimental method in detail. Subsequently, we present a selection of laboratory experiments in the field of innovation policy, namely on intellectual property rights, financial instruments, payment schemes, and R&D competition, before the final chapter concludes.

Methodological approaches in innovation research

A large number of research methods have been developed to analyze which policy instruments might best foster innovative activity. Weimann (2015, pp. 247–248) categorizes the different methods of generating insight by their ability to identify causal relations, their generalizability to other contexts (external validity), and their broad applicability; in particular, he emphasizes the trade-off between causality and external validity. Thus, Weimann distinguishes between (1) neoclassical models pointing out causal relationships, (2) “traditional” empirical research primarily showing correlations, (3) natural experiments attempting to substantiate causal relationships, (4) randomized field experiments that optimally offset the trade-off between causality and external validity, and (5) laboratory experiments providing strong causality yet lacking external validity. Figure 1 provides an overview of these methodological approaches and their features in a Venn diagram. The figure shows that none of the existing methods fulfills all three features identified by Weimann (2015); each can only meet one or two criteria.
Fig. 1

Methodological approaches and their features. Note: the figure is based on the classification by Weimann (2015)

(1) Neoclassical models such as game-theoretical or general equilibrium models have the advantage of allowing causal relations to be derived and of being easily applicable, yet they often lack external validity. Empirical investigations in innovation economics most commonly use the methods of (2) “traditional” empirical economic research, for instance, official patent statistics or firm-level micro data from surveys. Here, OLS estimations are considered appropriate to analyze and quantify observable variables of innovation processes; for dynamic effects, however, these methods often lead to problems of causality, endogeneity, and selectivity. A further shortcoming of this form of data is that innovation surveys necessarily rely on entrepreneurs’ willingness to voluntarily disclose information about their firm, which potentially biases the data. Furthermore, the extent to which government funding is actually used for research by the firms often remains unclear, and public funding decisions often lead to a selectivity bias, making public funding an endogenous variable and establishing further dependencies between the respective variables (Busom, 2000). Moreover, patents and patent pools are often used as a proxy for innovation activity to estimate firms’ innovation output. This prompts a number of issues, for example, because small and medium enterprises patent less than large firms and use other forms of protecting their innovations, owing to potentially expensive patent litigation and the risk of patent theft (Thomä and Bizer 2013). Nevertheless, this methodological approach to innovation research has strongly improved its data availability, methods, and research designs in the past 25 years, implementing methods such as difference-in-differences estimators, sample selection models, instrumental variables, and non-parametric matching methods (Angrist and Pischke 2010; Zúñiga-Vicente et al. 2014).
Overall, this approach entails a high level of external validity and applicability but often only a low level of causality.
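The difference-in-differences logic mentioned above can be illustrated with a minimal sketch. The firm data and effect size below are entirely hypothetical and serve only to show how the estimator isolates a treatment effect by netting out a common time trend shared by treated and control firms:

```python
# Illustrative sketch (hypothetical data, not from any study reviewed here):
# difference-in-differences applied to mean R&D spending of subsidized
# (treated) and unsubsidized (control) firms before and after a program.
from statistics import mean

# Hypothetical mean R&D spending (in EUR 1,000) per firm.
treated_pre = [100, 110, 95, 105]
treated_post = [130, 145, 120, 135]
control_pre = [90, 100, 85, 95]
control_post = [100, 110, 95, 105]

def diff_in_diff(t_pre, t_post, c_pre, c_post):
    """DiD estimate: change in the treated group minus change in controls."""
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

effect = diff_in_diff(treated_pre, treated_post, control_pre, control_post)
print(effect)  # treated firms rose by 30, controls by 10, so the estimate is 20
```

Regression implementations add covariates and standard errors, but the core estimate is exactly this double difference.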

Another empirical means of evaluating policy instruments is (3) natural experiments, which feature a high level of external validity. Furthermore, due to improved methodological approaches, causal relations have been substantiated more convincingly in recent years. However, the applicability is often low, since it is difficult to find appropriate control groups that would enable a clear comparison (Weimann 2015).

It has been argued that the issues involved in using the “traditional” methods of empirical economic research can best be solved by conducting (4) randomized field experiments, in which real-life settings are treated like experiments. They are considered the “gold standard” for evaluating new policy instruments as they enable identifying causality rather than mere correlations (Boockmann et al. 2014; Falck et al. 2013). As an example, Chatterji et al. (2013) suggest that the distribution of building sites in new industrial areas could be randomized, which would lead to better results in subsequent impact analyses of cluster policies. While optimally combining external validity and causality, randomized field experiments suffer from a lack of applicability as their adequate design is time-consuming, expensive, and often highly impractical; consequently, other methods are regularly preferred (Angrist and Pischke 2010).

(5) Laboratory experiments can be considered an alternative to overly costly and impractical field experimentation, combining a high level of causality with a high level of applicability. Despite the lower level of external validity, laboratory studies can be a valuable substitute for randomized field experiments and provide insightful new angles to research topics inaccessible through “traditional” empirical methods.

Since each method has its own strengths and weaknesses, the method used for a particular research question should be chosen depending on the object of research, the availability of data, and the possibility of conducting field experimentation. Overall, a mix of complementary empirical methods might thus be the most promising approach (Weimann, 2015). In the following, we focus on laboratory experiments, the most recent addition to the methodological toolbox of innovation research, and discuss their limitations and advantages.

Limitations and advantages of experimental methods

Although the results of lab experiments can be transferred and used to derive relevant policy implications, there are systematic limitations to this approach. Critics of lab experiments such as Levitt and List (2007, 2008) emphasize these restrictions, while Falk and Heckman (2009) provide refutations.

Observation

Participants are observed and act in an artificial environment, which might influence their behavior due to expectancy effects and the experimenter demand bias. Barmettler et al. (2012) contradict this argument and show experimentally that complete anonymity between the experimenter and participants does not change the latter’s behavior. Furthermore, it is argued that close social observation is not limited to the lab but rather is a feature common to all economic interactions.

Stakes

It can be argued that the stakes in experiments are too low to induce realistic behavior in participants. Experiments with varying stake sizes yield mixed results depending on the experimental situation (Camerer and Hogarth 1999). However, Falk and Heckman (2009) ask how often people make choices involving sums equal to their monthly incomes and how representative such high-stake experiments would actually be. Consequently, they suggest that the average level of stakes in laboratory experiments corresponds to the most common choices that individuals make.

Sample size

The sample sizes of lab experiments are criticized as being too small, although proponents counter that the sample sizes are adequate for this method and thus yield valid inferences.

Participants

Student participant pools are considered unrepresentative of the overall population. While this might not be a problem when testing theories, in the case of innovation experiments, other populations such as researchers or entrepreneurs might be more appropriate experimental participants, depending on the research question.

Self-selection

There is a self-selection bias since students with particular traits sign up for participant pools. Nevertheless, student pools ensure that the selection can be controlled and provide information on participants’ demographics, personal backgrounds, and preferences. Thus, the disadvantages connected to selection biases—which are potentially prevalent in field experiments as well as other empirical research methods—can be somewhat controlled.

Learning

Participants often cannot learn in experiments and adjust their behavior accordingly, yet this is also a prevalent factor in many economic interactions outside of the lab, as real-world interactions can often be considered as one-shot games with no chance of learning in repeated decisions. Furthermore, a large number of repeated games have been considered in experimental settings to determine learning effects, for example, Cooper et al. (1999) with regard to incentive systems.

External validity

Lab experiments are considered to lack external validity, meaning that they produce unrealistic data without further relevance for understanding the “real world”: a criticism that holds true both for lab experiments and theoretical models (Weimann, 2015, pp. 240–241). The challenge in designing experiments is to establish the best way of isolating the causal effect of interest and thus providing insights about universally prevalent effects that transfer to other economic situations outside of the lab. In a recent study, Herbst and Mas (2015) show how well-designed experiments can ensure that individual behavior outside the lab is captured adequately, thereby attaining a higher external validity than traditionally assumed for laboratory studies. Further studies comparing laboratory and field evidence will have to show whether this might change the general perception of the external validity of lab experiments (Charness and Fehr 2015). However, in some research contexts, it might not be possible to substantially increase the external validity. In such cases, lab experiments can serve as a starting point to isolate clear effects of specific innovation instruments. Subsequently, these effects have to be investigated with other methods involving a higher external validity, e.g., field experiments in a firm. These methods then have to show whether the initial results from the laboratory hold in contexts outside the lab.

Generalizability

The lack of generalizability of behavioral patterns resulting from lab experiments that refrain from testing a theoretical model is criticized. While the arguments mentioned above reduce this problem, it remains a considerable drawback to some experimental evidence. Nevertheless, every empirical method faces this issue due to the unavoidable dependency of data on a specific context.

Overall, lab experiments entail several distinct advantages as they provide researchers with the means of deriving causal relations from controlled manipulations of specific conditions, while controlling all surrounding factors. This ensures precise measurements and makes it possible to preclude confounding effects such as multiple incentives or repeated interactions. The experimenter thus retains almost complete control of the decision environment, namely the material payoffs, the information given to participants, the order of decisions, and the duration and iterations of the experiment. Participants are assigned randomly, which reduces the selection bias. Moreover, they are incentivized monetarily for their decisions, whereby it can be assumed that decisions are taken seriously: “In this sense, behavior in the laboratory is reliable and real: Participants in the lab are human beings who perceive their behavior as relevant, experience real emotions, and take decisions with real economic consequences” (Falk and Heckman 2009, p. 536). The results are replicable and allow investigating specific institutions at a relatively low cost. This can be particularly useful when considering exogenous changes like policy interventions and new regulations, where counterfactual situations can be created and their effects tested far more easily in lab than in field experiments. With the possibility of altering only one factor, e.g., the patent regime, lab experiments allow analyzing the relevance of a particular factor without other factors confounding the observed behavior. Furthermore, lab experiments enable the researcher to examine different innovation types and the effects of incentives, and to split up the innovation process to observe individual behavior at particular points of the process (Falk and Heckman 2009; Smith 1994, 2003).

In the following, we review examples of different fields of innovation research where lab experiments have been put forth to provide novel insights.

Review

By analyzing the effects of specific policy instruments via economic experiments, several of the advantages of lab experiments described above can be used fruitfully. In particular, it becomes possible to compare counterfactual data of decision situations with and without a particular instrument. It is therefore possible to analyze subjects’ specific reactions to changes in the framework conditions, which is almost impossible when using “real-world” data. There are additional merits to the controlled lab environment, in which only one factor is changed; for instance, innovation behavior and its development can be observed and analyzed over several periods. Of course, the innovation process is necessarily stylized in lab experiments; nevertheless, a number of promising ideas concerning how to transfer the innovation process into the laboratory have been provided in recent years. Table 1 lists the experiments reviewed in the following chapters and briefly summarizes the particular task subjects had to solve.
Table 1

Overview of reviewed experiments

Field of research | Short title | Type of task | Subjects’ task in the experiment
Intellectual property rights | Buchanan and Wilson 2014 | Real effort search task | Producing and trading rivalrous and non-rivalrous goods composed of colors
Intellectual property rights | Meloso et al. 2009 | Real effort search task | Solving the knapsack problem and trading the potential components
Intellectual property rights | Buccafusco and Sprigman 2010 | Creative task | Creating and trading poems
Intellectual property rights | Crosetto 2010 | Creative task | Creating and extending words and deciding whether to use IP protection
Intellectual property rights | Brüggemann et al. 2015 | Creative task | Creating and extending words, setting license fees
Financial instruments | Brüggemann and Meub 2015 | Creative task | Creating and extending words, setting license fees
Financial instruments | Brüggemann 2015 | Creative task | Creating and extending words, setting license fees
Payment schemes | Eckartz et al. 2012 | Real effort search task | Combining as many words as possible from 12 given letters
Payment schemes | Ederer and Manso 2012 | Real effort search task | Managing a virtual lemonade stand
Payment schemes | Erat and Gneezy 2015 | Creative task | Solving rebus puzzles
Payment schemes | Bradler 2015 | Creative task | Imagining unusual uses for items
R&D competition | Isaac and Reynolds 1988 | Investment task | Taking investment choices under competition
R&D competition | Isaac and Reynolds 1992 | Investment task | Taking investment choices including the game bingo
R&D competition | Sbriglia and Hey 1994 | Search task | Finding a letter combination by buying different letter trails under competition
R&D competition | Zizzo 2002 | Investment task | Competing for a prize over several periods
R&D competition | Silipo 2005 | Investment task | Accumulating “knowledge units” under risk and competition
R&D competition | Cantner et al. 2009 | Search task | Searching for product specifications of a car including investment and competition
R&D competition | Aghion et al. 2014 | Investment task | Competing for finding an innovation including investment and risk

Intellectual property rights

For instance, there are several experiments implementing (real effort) search tasks to simulate the innovation process. Buchanan and Wilson (2014) design an experimental environment in which subjects produce, trade, and consume rivalrous and non-rivalrous goods. Rivalrous goods are produced out of two complements and can be sold. By contrast, producing non-rivalrous goods requires participating in a search task to find the “favorite good” of the specific period, which is more valuable than the rivalrous good and, unlike rivalrous goods, can be sold several times. The authors implement one treatment with intellectual property, in which selling and transferring the non-rivalrous good is restricted to its owner, and one treatment without intellectual property, in which non-rivalrous goods can be created several times. The authors find no differences in the value of the produced non-rivalrous goods or in the average money earned, regardless of intellectual property protection. Overall, Buchanan and Wilson suggest that intellectual property protection does not spur innovativeness. Instead, protection only serves as an additional incentive, whereas the existence of entrepreneurial individuals is more important. These entrepreneurs subsequently profit substantially from the protection, but also generate wealth without intellectual property protection.

Meloso et al. (2009) use another kind of search task, namely the knapsack problem, to simulate intellectual discovery in a patent and a non-patent market system, in which components of potential discoveries are traded. The goal of the knapsack problem is to select the combination of components that maximizes total value while respecting a weight constraint. In sum, the number of subjects who were able to find the correct solution to the knapsack task was higher in the market system, which has the advantages that no scope of intellectual property rights has to be defined beforehand and that it entails no monopoly rights. Therefore, the authors state that, contrary to what theoretical contributions suggest, markets do not necessarily fail for non-excludable and non-rival goods.
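As an illustration, the 0/1 knapsack problem underlying this design can be solved with a few lines of dynamic programming. The values, weights, and capacity below are hypothetical examples, not the parameters used by Meloso et al. (2009):

```python
# Illustrative sketch: the 0/1 knapsack problem as a stylized discovery task.
# Item values, weights, and the capacity are made-up example numbers.
def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the weight capacity,
    using standard dynamic programming over items and remaining capacity."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```

Subjects in the experiment must approximate this optimum by hand, which is what makes the task a plausible stand-in for costly intellectual discovery.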

Buccafusco and Sprigman (2010) let subjects write poems and implement a market for the poems. Depending on the initial distribution of intellectual property rights, they find different preferences of the innovators, owners, and buyers. There is a robust endowment effect that manifests itself in the high offers of innovators and a significantly lower willingness to pay among the buyers. This experiment has the advantage of simulating the innovation activity most closely on an individual level, yet it is not possible to further evaluate the particular poems and determine a ranking for the quality of the innovations.

Including further features of the innovation process—namely creativity, ownership, and investment choices—Crosetto (2010) developed a task to simulate innovative activity based upon the board game Scrabble. He uses his setting to analyze the individual behavior when subjects have to create and extend words and are able to select between the intellectual property schemes of open source and fixed license fees. He finds that subjects’ propensity to provide their innovations open source is more likely when the level of license fees is high. Brüggemann et al. (2015) extend this experimental setting to test for the effect of different regulatory incentive schemes on the individual innovativeness. They compare a treatment with the possibility to choose the amount of license fees to a system without license fees and further implement the ability to communicate. They find that communication does not change the innovative behavior and that welfare is higher in the no-license-fee system than in the license-fee system. However, when given the possibility to license innovations, subjects display a high demand for being rewarded monetarily rather than providing innovations to other participants free of charge.

Financial instruments

There is a broad literature on the difficulties of analyzing the effect of subsidies and other public programs to foster innovativeness, due to endogeneity and selection bias problems. Although the methods used have advanced substantially in recent years, lab experiments can contribute to this sub-field of innovation research (Blundell and Costa Dias 2009). In some cases, experiments might be the only way to provide insights about new, and potentially costly, policy instruments before they are implemented in the “real world.” This approach might thus be a particularly promising methodological choice when testing new institutional framework conditions that aim at fostering innovative activity. Nevertheless, there is only a limited number of studies dealing with financial instruments to date.

Using the Scrabble-based word creation task introduced by Crosetto (2010), Brüggemann and Meub (2015) analyze individual behavior in two types of innovation contests, awarding subjects a bonus for the best innovation in one treatment and for the largest innovation effort in another, and comparing individual performance to a benchmark treatment without a prize. They find that the willingness to cooperate decreases when innovation contests are introduced, while overall welfare remains constant across treatments. Furthermore, using the same word task, Brüggemann (2015) analyzes the effects of two distinct forms of subsidies on innovativeness: first, supplying resources earmarked for innovative activities and, second, providing additional financial resources not restricted to use in innovative activities. She finds that both forms of subsidy lead to a crowding-out of private investment and to negative welfare effects once the costs of the subsidy are included. Furthermore, the subsidies fail to induce a positive effect on individual innovation behavior.

Payment schemes

Another class of experiments focuses on the creative element of innovation and the effects of different payment schemes. Eckartz et al. (2012) test the effects of different payment schemes on creativity using a word-based real effort task, where subjects have to combine as many words as possible out of 12 prescribed letters within a certain time. They examine a flat fee, a linear payment, and a tournament and find no substantial differences between the three incentive schemes. Similarly analyzing different payment schemes, Ederer and Manso (2012) compare the innovative activity when offering a fixed wage, a wage based upon pay-for-performance, and a split wage, which is fixed at the beginning and based upon performance later on. In a search task, subjects have to manage a lemonade stand, whereby they have to decide upon several variables such as the location, content, and price to find the most profitable solution. The authors find that the split wage with tolerance for early failure and compensation for long-term success leads to more innovative effort and higher overall welfare.
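A minimal sketch of the validity check such a word task requires: a submitted word counts only if it can be built from the given letters without using any letter more often than it was provided. The letter endowment and example words below are hypothetical, not those of Eckartz et al. (2012):

```python
# Illustrative sketch: validating submissions in a word-combination task.
# The 12-letter endowment and the test words are hypothetical examples.
from collections import Counter

def can_form(word, letters):
    """A word is valid if it never needs more copies of a letter than given."""
    available = Counter(letters.lower())
    needed = Counter(word.lower())
    return all(available[ch] >= n for ch, n in needed.items())

letters = "AEINRSTLODUC"  # hypothetical 12-letter endowment
print(can_form("TRAIN", letters))   # True: each letter is available once
print(can_form("LETTER", letters))  # False: needs two E's, only one given
```

Scoring a subject's performance then reduces to counting (or summing the lengths of) their valid submissions within the time limit.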

Erat and Gneezy (2015) compare three payment schemes, namely a pay-for-performance scheme, a competitive scheme, and a benchmark without incentives. Unlike Ederer and Manso (2012), they use rebus puzzles as a creative task and find that competition reduces creativity and a pay-for-performance scheme does not change creativity in comparison to a situation without incentives. Comparing the two financial incentives, creativity is higher in a pay-for-performance scheme.

Bradler (2015) uses the “unusual uses task”, an established creativity test, to compare accomplishment, self-reporting, and risk behavior. In the task, subjects have to imagine as many uses for a particular object as possible in a certain time, choosing their preferred payment scheme, i.e., a tournament or a fixed payment, prior to the task. She finds that the different payment schemes appeal to different types of subjects: risk-loving subjects with a high self-assessment tend to choose the tournament; however, in contrast to previous studies, creative subjects do not choose the tournament more often than the fixed payment.

R&D competition

Finally, the experiments on R&D competition focus on different investment tasks to analyze individual behavior in competitive and innovative environments. Experiments on patent races and R&D competition were first established by Isaac and Reynolds (1988) to simulate a one-stage stochastic invention model and subsequently a two-stage model (Isaac and Reynolds 1992). This class of experiments aims to test the findings of theoretical models against empirical evidence; in contrast to the experiments described before, they do not analyze specific policy instruments. Sbriglia and Hey (1994) develop a costly combinatorial task representing research competition for a patentable innovation to analyze three behavioral problems of patent races, namely how subjects select their search procedures, which investment strategies they use, and how information is processed. The authors identify different types of innovators: the “winners”, who search successfully, do not act randomly, and invest more than the “losers”, who are unable to establish a strategic search procedure. Furthermore, stronger competition accelerates the rate of investment, and with a higher number of periods, successful players more commonly adapt their search behavior. Zizzo (2002) tests the multi-stage patent race model by Harris and Vickers (1987) with an investment task in which subjects compete for a monetary prize over several periods. His results disconfirm the theoretical assertions, as leaders of a patent race do not invest more than their followers. Furthermore, he finds no virtual monopoly, and investments do not change as predicted by the model. Silipo (2005) analyzes the cooperation and break-up behavior in joint ventures in a dynamic patent race model, both theoretically and experimentally.
In the model, the starting positions of the competitors are crucial for whether they cooperate: if the innovators start at different points of the research process, the probability of joint ventures decreases, while in joint ventures, the pace of the process slows down. The results of the experiment correspond to the model, aside from some races in which subjects perform worse than anticipated.

Cantner et al. (2009) test a patent race model limited to a duopoly market without price competition by implementing a multi-dimensional search task with uncertainty. They find that different strategies solve the task, namely risky innovative investment and risk-free imitations. On average, subjects choose the risky innovative investment based upon the risk of an investment failure, their anticipated revenue, and their relative success in the experiment. Furthermore, the gap in subjects’ earnings has a positive impact on their investment in the next periods. Finally, Aghion et al. (2014) analyze the effects of competition on a step-by-step innovation by means of a risky investment task with different levels of competition and time horizons. The results show an increase in investment for neck-and-neck firms, yet a decrease in investment for firms lagging behind.

Conclusions

In this paper, we present the limitations and advantages of using laboratory experiments for innovation research and review 18 examples from four specific fields in which lab experiments have already been conducted. As the experimental method yields promising results in testing intellectual property rights, financial instruments, payment schemes, and R&D competition, we suggest that laboratory experiments can serve as a useful additional tool for innovation economists and a source of promising new insights for innovation research.

In particular, we argue that lab experiments should be used to target specific policy questions and thus provide measures of the effectiveness of specific instruments prior to their introduction. In marked contrast to all other methods, this approach yields evidence from counterfactual situations and allows strong control over the setting, for example, when testing external incentives for innovative activity or changing parameters of the institutional framework. We therefore follow Chetty (2015) and Weimann (2015), who suggest a pragmatic perspective on behavioral economics, adding experimental evidence to the existing methods whenever its particular advantages outweigh its limitations. Within this pragmatic perspective, laboratory experiments can contribute to public policy in three ways: by suggesting new policy instruments, by developing better predictions regarding the effects of existing policies, and by measuring welfare implications more accurately. Beyond policy, this strand of literature can also be used to derive managerial implications. Studies on external incentives for fostering innovative activities are particularly relevant, since experiments analyzing optimal payment schemes give managers practical advice on how best to foster the innovative activities of their employees.

We hope that this overview encourages other researchers to use lab experiments in innovation research, which could be further developed in several domains: as the existing laboratory studies on financial instruments measure effectiveness, future studies might focus on measuring efficiency, which would represent promising progress in evaluating new means of public policy. Furthermore, lab experiments might serve as a methodological starting point for developing new policy instruments. From a managerial perspective, future experimental innovation research might aim at a more comprehensive understanding of the innovation process itself. For example, experimental researchers might analyze innovative work in teams and thus decompose the innovation process into its components, which is readily possible in a laboratory environment. Moreover, the role of external incentives in encouraging employees' innovativeness might be further emphasized.

Declarations

Acknowledgements

Financial support from the German Federal Ministry of Education and Research via the Hans-Böckler-Stiftung is gratefully acknowledged. Further, we would like to thank Till Proeger for his very helpful comments.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Faculty of Economic Sciences, Chair of Economic Policy and SME Research, University of Göttingen

References

  1. Aghion, P., Bechtold, S., Cassar, L., & Herz, H. (2014). The causal effects of competition on innovation: experimental evidence (National Bureau of Economic Research Working Paper No. w19987).
  2. Angrist, J. D., & Pischke, J.-S. (2010). The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), 3–30. doi:10.1257/jep.24.2.3
  3. Barmettler, F., Fehr, E., & Zehnder, C. (2012). Big experimenter is watching you!: anonymity and prosocial behavior in the laboratory. Games and Economic Behavior, 75(1), 17–34. doi:10.1016/j.geb.2011.09.003
  4. Bator, F. M. (1958). The anatomy of market failure. The Quarterly Journal of Economics, 72(3), 351–379. doi:10.2307/1882231
  5. Blundell, R., & Costa Dias, M. (2009). Alternative approaches to evaluation in empirical microeconomics. The Journal of Human Resources, 44(3), 565–640. doi:10.3368/jhr.44.3.565
  6. Boockmann, B., Buch, C. M., & Schnitzer, M. (2014). Evidenzbasierte Wirtschaftspolitik in Deutschland: Defizite und Potentiale. Perspektiven der Wirtschaftspolitik, 15(4), 307–323. doi:10.1515/pwp-2014-0024
  7. Borrás, S., & Edquist, C. (2013). The choice of innovation policy instruments. Technological Forecasting and Social Change, 80(8), 1513–1522. doi:10.1016/j.techfore.2013.03.002
  8. Bradler, C. (2015). How creative are you?: an experimental study on self-selection in a competitive incentive scheme for creative performance (ZEW - Centre for European Economic Research Discussion Paper No. 15-021).
  9. Brüggemann, J. (2015). The effectiveness of public subsidies for private innovations: an experimental approach (cege Discussion Paper No. 266).
  10. Brüggemann, J., & Meub, L. (2015). Experimental evidence on the effects of innovation contests (cege Discussion Paper No. 251).
  11. Brüggemann, J., Crosetto, P., Meub, L., & Bizer, K. (2015). Intellectual property rights hinder sequential innovation: experimental evidence (cege Discussion Paper No. 227).
  12. Buccafusco, C., & Sprigman, C. (2010). Valuing intellectual property: an experiment. Cornell Law Review, 96(1), 1–46.
  13. Buchanan, J. A., & Wilson, B. J. (2014). An experiment on protecting intellectual property. Experimental Economics, 17(4), 691–716. doi:10.1007/s10683-013-9390-8
  14. Busom, I. (2000). An empirical evaluation of the effects of R&D subsidies. Economics of Innovation and New Technology, 9(2), 111–148. doi:10.1080/10438590000000006
  15. Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: a review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1-3), 7–42. doi:10.1023/A:1007850605129
  16. Cantner, U., Güth, W., Nicklisch, A., & Weiland, T. (2009). Competition in product design: an experiment exploring innovation behavior. Metroeconomica, 60(4), 724–752. doi:10.1111/j.1467-999X.2009.04057.x
  17. Charness, G., & Fehr, E. (2015). From the lab to the real world. Science, 350(6260), 512–513. doi:10.1126/science.aad4343
  18. Chatterji, A. K., Glaeser, E., & Kerr, W. (2013). Clusters of entrepreneurship and innovation (National Bureau of Economic Research Working Paper No. w19013).
  19. Chetty, R. (2015). Behavioral economics and public policy: a pragmatic perspective. American Economic Review: Papers and Proceedings, 105(5), 1–33. doi:10.1257/aer.p20151108
  20. COM(2014) 339. Research and innovation as sources of renewed growth.
  21. Cooper, D. J., Kagel, J. H., Lo, W., & Gu, Q. L. (1999). Gaming against managers in incentive systems: experimental results with Chinese students and Chinese managers. The American Economic Review, 89(4), 781–804. doi:10.1257/aer.89.4.781
  22. Crosetto, P. (2010). To patent or not to patent: A pilot experiment on incentives to copyright in a sequential innovation setting. In P. J. Ågerfalk, C. Boldyreff, J. González-Barahona, G. Madey, & J. Noll (Eds.), IFIP advances in information and communication technology: Vol. 319. Open source software. New horizons. 6th International IFIP WG 2.13 Conference on Open Source Systems (pp. 53–72). Berlin: Springer.
  23. Eckartz, K., Kirchkamp, O., & Schunk, D. (2012). How do incentives affect creativity? (CESifo Working Paper No. 4049).
  24. Ederer, F., & Manso, G. (2012). Is pay-for-performance detrimental to innovation? Management Science, 59(7), 1496–1513. doi:10.1287/mnsc.1120.1683
  25. Erat, S., & Gneezy, U. (2015). Incentives for creativity. Experimental Economics. doi:10.1007/s10683-015-9440-5 (first published online).
  26. Falck, O., Wiederhold, S., & Wößmann, L. (2013). Innovationspolitik muss auf überzeugender Evidenz basieren. ifo Schnelldienst, 66(5), 14–19.
  27. Falk, A., & Heckman, J. J. (2009). Lab experiments are a major source of knowledge in the social sciences. Science, 326(5952), 535–538. doi:10.1126/science.1168244
  28. Harris, C., & Vickers, J. (1987). Racing with uncertainty. The Review of Economic Studies, 54(1), 1–21.
  29. Herbst, D., & Mas, A. (2015). Peer effects on worker output in the laboratory generalize to the field. Science, 350(6260), 545–549. doi:10.1126/science.aaa7154
  30. Isaac, R. M., & Reynolds, S. S. (1988). Appropriability and market structure in a stochastic invention model. The Quarterly Journal of Economics, 103(4), 647–671. doi:10.2307/1886068
  31. Isaac, R. M., & Reynolds, S. S. (1992). Schumpeterian competition in experimental markets. Journal of Economic Behavior & Organization, 17(1), 59–100. doi:10.1016/0167-2681(92)90079-Q
  32. Levitt, S. D., & List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21(2), 153–174. doi:10.1257/jep.21.2.153
  33. Levitt, S. D., & List, J. A. (2008). Homo economicus evolves. Science, 319(5865), 909–910. doi:10.1126/science.1153911
  34. Madrian, B. C. (2014). Applying insights from behavioral economics to policy design. Annual Review of Economics, 6, 663–688. doi:10.1146/annurev-economics-080213-041033
  35. Mazzucato, M., Cimoli, M., Dosi, G., Stiglitz, J. E., Landesmann, M. A., Pianta, M., Walz, R., & Page, T. (2015). Which industrial policy does Europe need? Intereconomics, 50(3), 120–155. doi:10.1007/s10272-015-0535-1
  36. Meloso, D., Copic, J., & Bossaerts, P. (2009). Promoting intellectual discovery: patents versus markets. Science, 323(5919), 1335–1339. doi:10.1126/science.1158624
  37. Sbriglia, P., & Hey, J. D. (1994). Experiments in multi-stage R&D competition. Empirical Economics, 19(2), 291–316. doi:10.1007/BF01175876
  38. Silipo, D. B. (2005). The evolution of cooperation in patent races: theory and experimental evidence. Journal of Economics, 85(1), 1–38. doi:10.1007/s00712-005-0115-0
  39. Smith, V. L. (1994). Economics in the laboratory. Journal of Economic Perspectives, 8(1), 113–131. doi:10.1257/jep.8.1.113
  40. Smith, V. L. (2003). Constructivist and ecological rationality in economics. The American Economic Review, 93(3), 465–508. doi:10.1257/000282803322156954
  41. Sørensen, F., Mattson, J., & Sundbo, J. (2010). Experimental methods in innovation research. Research Policy, 39(3), 313–323. doi:10.1016/j.respol.2010.01.006
  42. Thomä, J., & Bizer, K. (2013). To protect or not to protect?: modes of appropriability in the small enterprise sector. Research Policy, 42(1), 35–49. doi:10.1016/j.respol.2012.04.019
  43. Vedung, E. (1998). Policy instruments: Typologies and theories. In M.-L. Bemelmans-Videc, R. C. Rist, & E. Vedung (Eds.), Carrots, sticks and sermons. Policy instruments and their evaluation (pp. 21–58). New Brunswick: Transaction Publishers.
  44. Weimann, J. (2015). Die Rolle von Verhaltensökonomik und experimenteller Forschung in Wirtschaftswissenschaft und Politikberatung. Perspektiven der Wirtschaftspolitik, 16(3), 231–252. doi:10.1515/pwp-2015-0017
  45. Zizzo, D. J. (2002). Racing with uncertainty: a patent race experiment. International Journal of Industrial Organization, 20(6), 877–902. doi:10.1016/S0167-7187(01)00087-X
  46. Zúñiga-Vicente, J. Á., Alonso-Borrego, C., Forcadell, F. J., & Galán, J. I. (2014). Assessing the effect of public subsidies on firm R&D investment: a survey. Journal of Economic Surveys, 28(1), 36–67. doi:10.1111/j.1467-6419.2012.00738.x

Copyright

© The Author(s). 2016