Although the remedies examined above are all significant measures for combating imitation, a few impediments stifle their effectiveness and ultimately erode their ability to tackle this problem on a global level. First, successful cooperative efforts on an international scale are almost always the exclusive purview of the OECD countries, as opposition remains from the developing countries for which the counterfeiting industry is still a major source of income (Bush et al., 2001). Legal remedies also encounter major difficulties, as government action against imitators becomes warranted only by the presentation of solid, indelible evidence of illicit practices or of explicit links to criminals. This paper contends that the main quandary with all these legal and cooperative remedies resides in their ‘biased engineering’, which seems to have been tailored for the West alone. The crux of the problem is that these remedies operate under the Western assumption that ‘imitation is bad’ and as such needs to be tackled, a belief whose tenability is not warranted in other parts of the world, where other cultural influences have shaped different attitudes towards imitation.

China: A Cultural Impasse?

Of particular interest for this paper is the South East Asian region, and more precisely, the People’s Republic of China (PRC), the world’s largest imitator nation.

In order to understand why imitation practices have flourished to this extent in the PRC, one must comprehend the workings of the Chinese legal system and how it was forged by the cultural currents that have reigned supreme in the nation for centuries. This is necessary because each culture produces its own attitudes towards dispute resolution, which are ultimately captured in the formulation of its own legal system (O’Connor & Lowe, 1996). The Chinese legal system has primarily been shaped by Confucianism and Legalism, two powerful philosophical currents that protected the interests of the state and society as a whole, as opposed to the individual. As a result, no independent system of administering and enforcing the law was established, and the emphasis on harmony and self-governance gave rise to an aversion to adversarial conflicts and public disruptions, in stark contrast to the more adversarial and confrontational legal systems of the West (O’Connor & Lowe, 1996). Much of China’s reluctance to forfeit its imitation habits today can be traced back explicitly to Confucianism, which holds that learning takes place through copying and that imitation is a form of flattery (O’Connor & Lowe, 1996). Given that the concept of IPR legislation has always been at odds with the teachings of Confucianism, it is not difficult to see why imitation is such a widespread practice in China.

Although IPR legislation is being more strictly enforced each year, the philosophies of Confucianism and Legalism, which lend ‘legitimacy’ to imitation practices, remain deeply imbued in Chinese society. Since China will, in all likelihood, not give up its culturally anchored imitation habits any time soon, the need becomes apparent for radically different remedies of transcultural validity, i.e. remedies whose successful implementation is not susceptible to cultural divergences.

THE PROPOSED/ALTERNATIVE SOLUTION

On the basis of all these considerations, this research paper challenges the commonly held view that imitation can be most effectively contained through cooperative/legal actions or even the joint working of both government and business actors (Bush et al., 2001; Chaudry et al., 2005). Against this mainstream view, this paper presents a more ‘anarchic’ perspective on research in this domain, as it claims that imitation practices can be most effectively restrained by business entities alone. By examining causal ambiguity (Lippman & Rumelt, 1982; DeFilippi & Reed, 1990) and complexity theory (Rivkin, 2000), two prominent, allegedly ‘opposing’ explanatory frameworks within the imitation deterrence field, and identifying the conceptual commonalities between them, this paper contends that producers alone have the potential to best curb imitation of their strategies by increasing the complexity of their business decisions.

The Role of Complexity & Causal Ambiguity Theories

Although the literature draws a clear demarcation between complexity theory and causal ambiguity theory, the latter being considered part of the ‘classical, resource-based deterrents to imitation’ school of thought (Rivkin, 2000) and the former a radically different approach, the two theories, far from belonging to ‘opposite camps’, are conceptually related. After highlighting the significant conceptual similarities between the two theoretical frameworks, as well as their discrepancies, this paper suggests that the two theories complement each other in several important respects and, in doing so, provide a more comprehensive account of the issue than either theory can offer individually. Despite these complementarities, however, it is argued that causal ambiguity theory is itself fundamentally ‘rooted’ in complexity.

This striking convergence, coupled with the standing of these two supposedly ‘adversary’ theories in the field under scrutiny, lends credibility to the assertion that complexity is the primary, core deterrent to imitation.

Complexity Theory

As Rivkin (2000) notes, successful business actors seek to deter imitation, and empirical evidence shows that some do so successfully. Particularly striking is the ability of some firms to defy imitation even though the ‘ingredients’ of their strategies have been revealed in an extensive array of sources, ranging from analyst reports to books written by founding executives (Rivkin, 2000). The fact that these strategies have been open to public scrutiny for quite some time suggests that ‘mere’ knowledge of the ingredients is not a sufficient prerequisite for successful imitation. But if lack of knowledge of the ingredients is not the impediment, then what prevents the replication of these successful strategies? Complexity theory provides an answer to this intriguing question.

Proponents of complexity theory set aside the classical, resource-based barriers to imitation, such as economies of scale and scope, causal ambiguity, tacit knowledge, first-mover advantage, and game-theoretic models that make imitation unrewarding (e.g. Barney, 1991; Lippman & Rumelt, 1982). The central claim here is that ‘the sheer complexity of a strategy can raise a barrier to imitation’ (Rivkin, 2000, p. 825). Following Simon (1962), strategy complexity is defined by two factors: the number of decisions that comprise a strategy and the level of interaction between them. Complexity increases as the decisions in a business strategy become more numerous and interdependent, rendering imitation very difficult and, beyond a certain threshold, virtually unfeasible (Rivkin, 2000). This statement encapsulates a very interesting implication, namely that a firm can deter imitation by doing many related things even though none of its individual actions is inimitable (Rivkin, 2000). As a result, a prospective imitator, even if he were perfectly knowledgeable of all the ‘ingredients’ that make up a successful business strategy, would still fail to replicate its ‘recipe’. In addition to its commonsense plausibility, complexity theory enjoys considerable support from theoretical precedents in other disciplines, in particular from research on loosely and tightly coupled systems in biology and organizations (Levinthal, 1997, as cited in Rivkin, 2000), as well as from ample empirical studies. In order to understand why the number of decisions involved in a strategy and the degree of interaction among them can pose a formidable barrier to imitation, and indeed render imitation ‘intractable’, Rivkin (2000) proposes an NK algorithmic model, similar to the one developed by the evolutionary biologist Kauffman, that parameterizes the two aforementioned aspects of complexity: the number of decisions (N) and the number of interactions among them (K).

The model allows the degree of complexity to be tuned and the effect of increasing it on the ease of imitation to be analysed. According to the theory, aspiring imitators who envy their rival’s leadership status are confronted with a ‘problem’, namely ‘solving’ for the firm’s winning business strategy. The ease of solution will of course be contingent on the difficulty of the problem, i.e. on the complexity of the benchmark strategy itself. The main idea behind the model is that interactions among decisions in the ‘winning’ strategy render the firm’s decision problem NP-complete, i.e. intractable to algorithmic solution. In other words, when interactions are pervasive, no general, step-by-step procedure exists that allows the imitator to locate the firm’s globally optimal strategy quickly (Rivkin, 2000). As a result, imitating managers, realizing that complexity renders global optimization unfeasible, resort to judgment and heuristics to locate the global optimum peak strategy ‘approximately’.

However, as Rivkin (2000) shows, complexity severely hampers not only global optimization but also the heuristics that imitating firms ‘plausibly’ employ in attaining sub-optimal outcomes. In order to understand formally how and why complexity generates this shield of deterrence, Rivkin’s (2000) model is examined in detail.

Understanding the ‘Tunable Complexity’ Model (Rivkin, 2000)

Rivkin’s model is constructed using two parameters, N and K. N is simply the number of decisions that a firm faces. For convenience, in Rivkin’s model each of the N decisions can be either 0 or 1, i.e. is restricted to a binary choice. As a result, there are 2^N possible configurations of decisions. K measures the degree to which the decisions interact, i.e. the number of decisions that are interrelated. K ranges from 0 to N-1. When K=0, the contribution of each decision to overall firm value depends exclusively on the choice made concerning that decision. As K grows larger, the contribution of each decision is affected by the others: K captures the fact that the choice made concerning one decision may affect the marginal benefit or cost associated with another decision. As Rivkin notes, although very simple, the model captures an important aspect of reality, as the inclusion of the variable N reflects the fact that executives are not searching for the global optimum along a single dimension, but rather within a very high-dimensional ‘decision space’ for an optimal combination of choices. (See Appendix, sections 2 and 3, for an in-depth treatment of how the model and the simulations were constructed.)
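The construction described above can be sketched in a few lines of Python. This is an illustrative sketch following Kauffman's standard NK recipe, not Rivkin's actual simulation code: each decision's contribution is drawn from a random table that depends on its own setting and on K others, and firm value is the average of the N contributions.

```python
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """Build a random NK value function: each of the n binary decisions
    contributes a value that depends on its own setting and on k others."""
    rng = random.Random(seed)
    # Decision i interacts with the next k decisions (cyclically).
    neighbors = [[(i + j) % n for j in range(k + 1)] for i in range(n)]
    # One random contribution table per decision, keyed by the local
    # configuration of that decision and its k interaction partners.
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]

    def value(config):
        """Firm value of a configuration = mean of the n contributions."""
        return sum(tables[i][tuple(config[j] for j in neighbors[i])]
                   for i in range(n)) / n

    return value

# With n binary decisions there are 2**n candidate strategies.
value = make_nk_landscape(n=8, k=3)
configs = list(itertools.product((0, 1), repeat=8))
best = max(configs, key=value)  # exhaustive search, feasible only for small n
```

Exhaustive enumeration works here only because n is tiny; the point of the model is that 2^N explodes long before realistic numbers of decisions are reached.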

The Landscape ‘Metaphor’ (Rivkin, 2000)

To better understand why complexity hampers imitation, Rivkin proposes to conceive of the decision problem at hand as a ‘landscape’ in which each of the N decisions constitutes a ‘horizontal’ axis in a high-dimensional space. Each combination of choices results in a certain value for the firm, and this value is plotted on the vertical axis. The chief goal of management, to maximize the firm’s value by creating competitive advantages vis-à-vis its competitors, here implies that decision makers will screen their action possibilities and select the configuration of decisions that yields the globally optimal strategy, i.e. the highest peak on this landscape. Imitators seek to locate this peak by attempting to identify its strategic ‘coordinates’ (component decisions), but the kernel of the theory is that as K grows, i.e. as the pieces of a strategy become more interdependent, the landscape corresponding to a firm’s decision problem becomes increasingly ‘rugged’ and multi-peaked, making the search for the globally optimal strategy considerably more difficult and, beyond a certain point, intractable (see Appendix, section 5).

In fact, as K grows, the correlation between the altitudes of adjacent points on the landscape, points that differ in how a single decision is made, declines dramatically, and local peaks proliferate. When K=0, choices make independent contributions to a firm’s value. In this situation, altering a single decision changes the contribution of that decision alone. Adjacent locations on the terrain of this landscape, i.e. configurations that differ in how a single decision is made, do not differ in altitude by more than 1/N, so the landscape is relatively smooth. On such a ‘landscape’, firms can scale the global peak via a series of incremental, single-decision moves. Hence, when K=0, the decision problem is characterized by a ‘smooth’ landscape with a single peak that can eventually be attained. In contrast, when K=N-1, every decision influences the contribution of every other choice. Adjacent locations on such a terrain have elevations that are uncorrelated with one another, resulting in a fully random landscape with many local peaks. As K rises, the landscape grows rugged and the altitude at any step becomes a poor predictor of the elevation at the next step. In such an environment, global optimization becomes very difficult and, indeed, intractable.

Incremental improvement is also highly ineffective on such a landscape. A firm may therefore attempt to approximate the benchmark strategy by undertaking a major, wholesale reconfiguration that might take it to a much higher level, but the factor K makes such an endeavor very risky: on rugged terrain, a long jump, or a simple miscalculation on any one of the many dimensions, can cause a firm to land in a trough instead of atop the intended peak. The landscape metaphor alone portrays fairly well the difficulties that imitators face as a result of complexity, but it still does not explain why imitation becomes intractable to algorithmic solution. A mathematical explanation is required for this purpose. Complexity cannot deter imitation unless it first makes global optimization difficult: if the decision problems posed by the model were easily solved by some algorithm, regardless of N or K, complexity alone would pose no obstacle to imitation. The central claim here is that when decisions are numerous and highly interdependent, a firm’s strategy becomes a decision problem of such complexity that it defies algorithmic solution and becomes intractable in the technical sense of the word.
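The failure of incremental improvement on a rugged landscape can be illustrated with a small simulation. The sketch below is an assumption-laden illustration (a Kauffman-style NK construction and a naive one-flip hill climber, not Rivkin's exact procedure): the climber flips one decision at a time until no single flip raises firm value, i.e. until it is trapped on a local peak, and its shortfall from the true global peak is measured for several values of K.

```python
import itertools
import random

def make_nk_landscape(n, k, seed):
    """Random NK value function (Kauffman-style construction)."""
    rng = random.Random(seed)
    neighbors = [[(i + j) % n for j in range(k + 1)] for i in range(n)]
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    return lambda c: sum(t[tuple(c[j] for j in nb)]
                         for t, nb in zip(tables, neighbors)) / n

def hill_climb(value, n, start):
    """Incremental improvement: flip one decision at a time until no
    single-decision move raises firm value (a local peak)."""
    current = list(start)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            trial = current[:]
            trial[i] ^= 1  # flip decision i
            if value(tuple(trial)) > value(tuple(current)):
                current, improved = trial, True
    return tuple(current)

n = 10
rng = random.Random(1)
gaps = {}
for k in (0, 4, 9):
    value = make_nk_landscape(n, k, seed=42)
    # Exhaustive global optimum (feasible only because n is small).
    global_peak = max(itertools.product((0, 1), repeat=n), key=value)
    start = tuple(rng.randrange(2) for _ in range(n))
    local_peak = hill_climb(value, n, start)
    gaps[k] = value(global_peak) - value(local_peak)
    print(f"K={k}: shortfall from global peak = {gaps[k]:.3f}")
```

When K=0 the shortfall is zero: every single-decision improvement is independent, so the climber always reaches the unique peak. As K grows, the climber can stall on one of the many local peaks and fall short of the global optimum.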

The Theory of NP-Completeness

In algorithm theory, the tractability of a particular problem hinges on its time complexity function, which relates the size of a specific instance of a problem (n) to the time that a given algorithm requires to solve it, t(n). An important distinction must be made here between polynomial-time algorithms, for which t is some polynomial function of n, and exponential-time algorithms, for which t is an exponential function of n. This distinction is crucial because exponential functions grow far faster than polynomial functions: algorithms with exponential time complexity become impractical for large instances, while algorithms with polynomial time complexity remain feasible. Problems solvable in polynomial time are said to fall into the P class and are considered by algorithm designers to be well solvable. Problems for which no polynomial-time algorithms exist are labeled ‘intractable’. Such problems fall into the nondeterministic polynomial class (NP). The NP-complete class is a subset of NP problems that have been shown to be as hard as any problem in NP.

It is widely believed, though not yet proven, that the NP-complete problems are all intractable. In all likelihood, no general, step-by-step procedure can be devised to solve a large instance of an NP-complete problem in reasonable time.
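The gulf between polynomial and exponential time complexity that underpins this distinction is easy to see numerically. The particular functions below, t(n) = n^3 versus t(n) = 2^n, are illustrative choices, not drawn from the paper:

```python
# Compare a polynomial time bound t(n) = n**3 with an exponential one
# t(n) = 2**n: the exponential bound is soon overwhelming.
for n in (10, 20, 50, 100):
    poly, expo = n ** 3, 2 ** n
    print(f"n={n:>3}: n^3 = {poly:>9,}  2^n = {expo:,}")
```

At n = 100, n^3 is one million steps, instantaneous on any machine, while 2^n is a 31-digit number: even at a billion steps per second, an exponential-time search would outlast the age of the universe. This is the precise sense in which a pervasively interactive strategy, whose decision problem is NP-complete, defies imitation.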