Complexity and Organization Management

Robert R. Maxfield

In recent years there has emerged a collection of interdisciplinary scientific efforts known as the science of complex systems, stimulated by the pioneering efforts of the Santa Fe Institute. Complex systems, which consist of many interacting entities and exhibit properties such as self-organization, evolution, and constant novelty, exist in all the domains of our world—physical systems, biological systems, human social systems—and are very difficult to comprehend by the standard reductionist analytic approach of modern science. The science of complex systems attempts to discover general laws governing such systems by bringing together people and ideas from many disciplines. As yet such general laws have not been found—indeed completely general laws may not exist—but the efforts have yielded much deeper insights into the systems studied. The scientifically significant results are so far mostly in the physical and biological domains, but the metaphors have proven to have tremendous appeal and utility in studying humans and human social systems.

The basis of the appeal of complex systems metaphors in thinking about our human world is not hard to find. We live in a time of rapid, unpredictable, and novel change; the manner of the demise of communism is an example that captures the essence of unpredictable change. For those of us with responsibility for effectively managing organizations, whether in the private or public sector, the instabilities in our present world call into question most of the conventional wisdoms about management.

My purpose in this paper is to propose that complex system metaphors provide a valuable intellectual framework for thinking about our human world and managing the organizations which comprise it. My perspective is as a practitioner of management in the high-tech industry, arguably the industry that has undergone the most rapid and fundamental changes over the last 40 years. I will try to impart some of the insights I have gained by applying the framework of complex system metaphors to my experience over more than 25 years. Although insights gained from the high-tech part of the private sector may at first glance appear to have no applicability to other domains such as the military or foreign policy, I believe at the proper level of abstraction all human organizations and institutions have much in common.

Since the study of complex systems is a recent development, most of us were trained in other fields, and when solving problems we apply the "arbitrary" component that Thomas Kuhn [1] refers to in his seminal work on scientific revolutions. For example, those of us who approach the world from a systems engineering perspective bring to it a background rich in mathematics, system theory, linear systems analysis and control theory, as well as a knowledge of decision analysis and game theory. Needless to say, with this kind of background, one tends to look at problems in a "systematic" way, trying to identify relevant and controllable variables, to decompose the problem into manageable parts, and to formulate the problem in terms of the solution tools and approaches that are our stock in trade.

Sooner or later we come across a problem or set of problems that is not tractable by applying the "standard" approaches and tools that came with our selected profession. For me this happened sooner rather than later. In 1969, shortly after I completed my engineering doctorate, in which I emphasized systems theory, I succumbed to Silicon Valley fever (though the term "Silicon Valley" had not yet been coined) and co-founded a computer company, ROLM Corporation, with three other colleagues, all with similar backgrounds which included almost no management experience or business education.

Eager to bring the tools of my profession to bear, I initially tended to apply my systems training to managing an organization. Need to make a decision? Apply decision analysis; define all the possible consequences of all the possible actions one might take, then assign probabilities and value functions to these various outcomes, then compute expected values and ascertain the "optimal" decision. Worried about competition? Apply game theory. It did not take long to realize that this "engineering" approach to problem solving was unsuited for the rapidly changing environment which I was in. Fortunately, my partners had sufficiently different perspectives and skills that as a group we were able, with plenty of trial and error, to manage and grow a human organization operating in a rapidly changing external environment.

Over the next twenty years, the company successfully grew to over 10,000 employees, but I never really felt comfortable with many aspects of organizational management. I acquired a set of skills, tools, and techniques that tended to work, but I had no overall intellectual framework, or mental model, for thinking about the world in which I was embedded. Several years ago, pursuing an interest in economics, I became aware of the Santa Fe Institute through one of its first publications, The Economy as an Evolving Complex System [2], and was introduced to the emerging field of the scientific study of complex adaptive systems. I became convinced that a complex systems approach could provide the unifying intellectual framework for thinking about the world of high-tech management. Just as a complex systems approach could show why current economic theories of equilibrium, perfect rationality, and decreasing returns are incapable of explaining the 20th-century global economy, it could also show why decision analysis and game theory are inadequate to explain or prescribe the behavior of firms.

In this paper, my objective is to show how complex system metaphors can be used to gain perspective on the world of high technology, and to suggest some implications for management of any organization operating in a context of rapid change. I will first review the four major properties of complex adaptive systems and relate them to the high-tech world, then suggest by a "Darwinian selection" argument that there are some common attributes of successful organizations. Finally, I will discuss strategic planning in complex environments.

Properties of Complex Adaptive Systems

By a complex adaptive system, or CAS, I mean an open-ended system of many heterogeneous agents who interact non-linearly over time with each other and their environment and who are capable of adapting their behavior based on experience. Open-ended means there is essentially limitless possibility for variability in agent characteristics and behavior. In non-human biological CASs, the source of agent variability is primarily genetic with inheritance; in human CASs the primary source of variability in behavior is the immeasurably large cognitive ability of the human brain. There are four major properties of the aggregate dynamics of CAS that set them apart from other systems: self-organization, evolutionary trajectories, co-evolution, and punctuated equilibrium. All of these properties are emergent, in the sense that complete knowledge of the individual agents is not sufficient to infer the details or timing of the aggregate properties. Professor Rosenau, elsewhere in this volume [3], has eloquently described these properties; I will briefly recap them and use them as a lens through which to view the world of high-technology.

Self-organization is the emergence of new entities or stable aggregate patterns of organization and behavior arising from the interactions of agents. Each higher level of organization has its own time-scale, and each new level has new kinds of relationships and properties. That is, a complex adaptive system on one level is made up of lower level complex adaptive systems interacting and creating the higher level order. In human systems we usually take the lowest organizational level as the individual, although each individual could be considered to be composed of lower-level CASs, such as our brains and immune systems. Human CASs have several characteristics which distinguish them from other classes of CAS such as physical or biological systems. First, we have more levels of organization. The next level up from the human individual is family, clan, firm, etc. Going on up, we have on the economic side industries, regional economies, the global economy; on the governing side we have cities, states, nations. So there are multiple levels of nested complex adaptive systems in which humans operate individually and collectively. Second, every individual is usually a member of several higher level entities—family, employer, profession, church, city, etc. So self-organization is not strictly nested; complex webs of interconnections between human CASs exist at all levels. Third, the higher-level human organizations (other than the family) are social constructions as opposed to natural constructions. That is, the entity types are creations of our collective imagination to which we attach names, such as firm, industry, and economy. And the rules that determine the interactions between these entities are also socially constructed and are not fixed laws of nature.

Evolutionary trajectories means that the future history of a given system from a given point in time cannot be determined by complete knowledge of the present state, and if you "re-run the tape" many times, every trajectory will most likely be unique. In particular, "historical accidents"—the occurrence of certain a priori very low-probability events—can dramatically change the outcome (e.g., Hitler's accession to power). However, in human systems as in simpler biological systems, the prerequisites for Darwinian natural selection are met—mechanisms for the creation of novel entities, limits to the population of entities, differential entity survival based on relative fitness, and heritability of attributes—which ensures that in any given trajectory we can expect to see the emergence of order in human systems analogous to the emergence of species and ecologies in nature.
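
To make the four prerequisites concrete, here is a minimal simulation sketch in Python. It is my own illustration rather than anything drawn from the works cited here, and the trait, fitness function, and parameters are arbitrary stand-ins.

```python
import random

# Minimal sketch of the four prerequisites for Darwinian selection named above:
# novelty creation (mutation), a limited population, differential survival
# based on relative fitness, and heritability of attributes.
# The trait, fitness function, and parameters are arbitrary illustrations.

POP_SIZE = 100        # limit on the population of entities
MUTATION_RATE = 0.05  # mechanism for creating novel entities

def fitness(trait):
    # Stand-in fitness measure; in a real CAS this would itself shift as
    # other agents and the environment co-evolve.
    return -abs(trait - 1.0)

population = [random.uniform(0.0, 2.0) for _ in range(POP_SIZE)]

for generation in range(50):
    # Differential survival: the fitter half of the population survives.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Heritability with occasional variation: offspring inherit a parent's
    # trait, sometimes perturbed by mutation.
    offspring = []
    for parent in survivors:
        child = parent
        if random.random() < MUTATION_RATE:
            child += random.gauss(0.0, 0.1)
        offspring.append(child)
    population = survivors + offspring

# Re-running the "tape" with a different random seed yields a different
# trajectory, yet ordered structure (traits clustered near the fitness peak)
# emerges each time.
print(sum(population) / len(population))
```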

Co-evolution takes the basic concept of Darwinian evolution to the next level. Instead of having a stable environment to determine fitness as agents adapt and evolve, a large part of each agent’s perceived environment consists of interactions with other agents, who are themselves adapting and evolving. And each agent interacts not only with other agents at the same level in the organizational hierarchy, as when firms compete in an industry, but also with agents at higher and lower hierarchical levels, such as firms’ relations with employees or the tax policies of the government. I believe that in thinking about human CASs, it is highly useful to include our artifacts—the inanimate things we create and make—as well as our organizations. In the term artifacts I include not only tools and products but information and knowledge. Our artifacts exhibit, in a more limited way, the properties of complex adaptive systems, in that they evolve (from the abacus to the personal computer), they co-evolve (weapon systems), and they exhibit increasing levels of organization (LANs to the Internet). And because our human organizations are largely organized around making and using artifacts, we really should view our human agents as co-evolving with the artifacts we create. The behavior of a particular agent depends, to a large degree, on the artifacts at its disposal. If, for example, a country has created a new weapon, its army will evolve to take advantage of the unique capabilities this new weapon offers. Further, if you are facing an army that has a different set of weapons, both the weapons you have and those that they have certainly matter, in terms of how you expect them to behave and how you are going to behave. Recently, the combination of two types of artifacts, weapons and computers, into a new type, smart weapons, has had an enormous impact upon defense systems at many levels.

Punctuated equilibrium is the tendency of a CAS to have stable patterns of activity for long periods of time, then have a short transition period of very rapid change in patterns, followed by new stable patterns of activity. In open-ended complex adaptive systems, it is usually impossible to predict when transitions will occur or what the resulting stable patterns will be. In our multi-level global human CAS, call it the human world, this phenomenon occurs at all levels, and the question of stability versus instability depends on which part of the system you are looking at, what kind of patterns you are looking for, and what time scale you are using. For example, macro-economists studying the U.S. economy would say that since the 1940s the U.S. GNP has grown fairly smoothly over time, with a few blips here and there, and conclude the U.S. economy is in an equilibrium state, and liken it to a finely tuned, smooth-running engine of production. But if one drops down to the level of the firm, one sees thousands of firms going out of business every year, and new ones forming all the time, hardly an equilibrium state.

The High-tech Sector

If we take a centuries-long view of our human world, it is easy to see patterns of punctuated equilibrium. In the words of Peter Drucker,

"every few hundred years in Western history there occurs a sharp transformation [in which] society rearranges itself—its world view; its basic values; its social and political structure; its arts; its key institutions. Fifty years later, there is a new world. And the people born then cannot even imagine the world in which their grandparents lived and into which their own parents were born." [4]

Most of the major transition periods coincide with the emergence of new classes of artifacts around which we reorganize ourselves—Gutenberg’s printing press in 1455 driving the Renaissance, Watt’s perfected steam engine in 1776 initiating the Industrial Revolution.

Unquestionably, the development of the digital computer in the 1940s, followed by the invention of the transistor about 1950 enabling the economic implementation of the computer, has spurred a new major transition phase for humanity, which many call the Digital Revolution. Over the past four decades we have seen many generations of evolution of new classes of artifacts enabling blindingly fast computation, unlimited information storage, and instantaneous communication over vast distances. These capabilities are driving rapid changes in all aspects of our human world, but nowhere is the pace of change more rapid than in the newly created sector of the economy—call it the high-tech sector—comprising those firms directly involved in the creation of the artifacts themselves—computer and software companies, telecommunication companies, semiconductor manufacturers, etc.

If we take as our frame of reference the complex adaptive system consisting of the agents and artifacts of the high-tech sector, we can see an incredibly rich display of all of the properties of complex adaptive systems played out over 40-plus years. As an example of self-organization, we see first the emergence of the computer industry, followed by the software industry, followed by the data communication industry, each with its own identity, trade associations, trade shows, and market research firms. Darwinian evolution is evident in the birth of new firms, typically inheriting "genes" of practices and cultures from older firms from which the entrepreneurs spun out, with survival of the fittest. Co-evolution is evident in the competition between firms leading to specialization into protectable niches, and in the entwined history of processor architectures, operating systems, programming languages, and networks. A good example of punctuated equilibrium is the "computer industry." First there was the era of the mainframe computer—room-sized, costing millions of dollars. After an initial shakeout period in the '50s, there emerged stable market shares split among eight companies, with IBM holding over 70% of the total market. In the late '60s a new variant of artifact appeared, the minicomputer, costing tens of thousands of dollars. This initiated a dramatic increase in the total computer market, and the minicomputer segment became a very sizable fraction. A plethora of new companies, in addition to the existing mainframe companies, vied for market share, but within a few years the minicomputer segment stabilized, dominated by four companies: IBM and three newcomers. Then in the late '70s, yet another "species" emerged, the personal computer, costing a few thousand dollars. Again a spate of new companies emerged to compete in a vastly expanded market, in addition to existing ones, and after a few years stability again set in with a handful of companies dominating the market, all new except IBM.

The Economist notes "twenty-five years ago only about 50,000 computers existed in the whole world; [today there are] an estimated 140 million...and that does not count the embedded processors inside cars, washing machines or even talking greetings cards. A typical car today has more computer processing power than the first lunar landing-craft had in 1969." [5] No matter what metric you choose—mips per processor chip, bits per memory chip, cost per mip, cost per byte of memory, transistors per chip—performance has increased by a factor of 100 every 10 years for the past three decades (Moore’s Law). There is no reason to believe that, at least for the next two decades, these trends will change.
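
As a quick arithmetic check of that rate (my own back-of-the-envelope calculation, not a figure from the article cited), a factor of 100 every 10 years corresponds to roughly a 58 percent improvement per year, or a doubling time of about 18 months, the figure usually quoted for Moore's Law:

```python
import math

# "Factor of 100 every 10 years" expressed as an annual growth factor and an
# implied doubling time.
annual_factor = 100 ** (1 / 10)                              # ~1.585x per year
doubling_time_years = math.log(2) / math.log(annual_factor)  # ~1.5 years

print(f"annual improvement factor: {annual_factor:.2f}x")
print(f"implied doubling time: {doubling_time_years:.1f} years")
```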

Characteristics of Successful High-tech Organizations

Clearly, organizations that survive and prosper in the high-tech sector must deal successfully with rapid change, not only in the artifacts with which they are associated, but in the agents with whom they compete and interact. Of the many thousands of new and existing firms that have attempted to compete in the high-tech sector over the last four decades, relatively few have succeeded. If we are searching for insights into managing organizations in rapidly changing environments, it would seem reasonable to look at these successful firms to see if they have traits in common: attitudes, management processes, organization forms, etc. Darwin's principle of natural selection would imply that those traits or characteristics that confer the best fitness will tend to spread through the population, either by "inheritance" through spin-outs or by imitation of successful role models. (Admittedly, we are dealing here with relatively few generations compared to biological evolution, so my argument should be considered suggestive rather than scientifically valid.) Are there such common traits? I believe there are, and I will try to summarize them here.

There are two key principles that high-tech organizations understand at a visceral level. The first is to recognize that time is the scarce commodity. An organization has to be able to match the rate of change in its environment. If it cannot, it does not matter what resources the organization has in terms of money, people, intellectual capital, goodwill, or any other resource. An organization that cannot keep pace will inevitably fall farther and farther behind; having large resources will only prolong the death spiral. One metric crucial to many companies is the length of the product development cycle, the time between successive generations or major versions of a product. Thirty years ago, five years was an acceptable cycle. Twenty years ago an upper bound was three years. Ten years ago the best companies were shooting for less than two years. Now, a new buzzword in Silicon Valley is "Internet time" [6], with product cycles measured in months.

The second key principle is to recognize that people are the key asset of any organization. Why? Because people are the adaptive element of organizations. Learning and innovation come only from human cognition. Perhaps someday computers will exhibit true artificial intelligence, but that is a long time away, if ever. Humans are great at pattern recognition, great at making sense of "messy" situations, great at learning and adapting. The critical management task is to enable employees to most effectively use these capabilities to learn and adapt for the benefit of the corporation. High-tech companies have always been the leaders in attitudes, cultures, and policies to keep their employees motivated, happy, and productive. Few successful high-tech companies are unionized; a successful union organizing effort would be considered a catastrophic management failure. Unions create adversarial relationships among classes of employees that erode the potential for collective learning and adaptation. If a significant number of employees feel they are not getting a fair shake and need a union to "fight" for them, management has failed.

If an organization takes to heart the two principles concerning time and people, what else needs to be done to ensure that an organization can adapt in a dynamic, complex environment? I find it useful to break this down into two questions: how can an organization allow adaptation, and then how can an organization encourage adaptation. Let’s consider each of these.

Although humans are the adaptive element in every organization, it does not follow that any organization will be adaptive. In fact, there is a deeply embedded metaphor in our society that works strongly against adaptable organizations—the metaphor of the organization as a machine. The metaphor grew naturally out of the last great social paradigm shift, the Industrial Revolution, in which science based on Newtonian physics led to the development of machines that replaced humans and animals as sources of energy for creating and transforming artifacts. A machine is a system of carefully designed parts interconnected in a precise way to accomplish a function repeatedly and reliably. The key to a machine is that each part has a known, predictable behavior in the system, and that the interconnection of the parts produces the result for which the system is designed. If one makes an analogy to human organizations, in which human beings are the component parts, there is the immediate problem that human behavior can be quite unpredictable. The answer to this, inspired by the work of Frederick Taylor [4] early this century, is to analytically determine the one best way to do each task, then train people to do it this way, and insist on reliable conformity—standard operating procedures. In a similar fashion, the interaction of the human components of the organization is carefully defined—who communicates with whom about what, who has responsibility for what. Since variability in results is to be avoided, authority to permit deviations from standard procedures is invested in only a few key individuals. We are all familiar with the end result of applying the machine metaphor—organizations that have precisely defined organization charts with many hierarchical levels, volumes of procedures defining most activities of the organization, and most major decision-making vested in a few central individuals at the end of long chains of authority. Staff organizations, mostly isolated from direct contact with the external environment, spend endless hours worrying about the "best" way to organize people into functional blocks and how these blocks should relate and communicate (aided by the writings of business school organization theorists), and designing "optimal" work flows and methodologies (aided by systems and operations research theorists). By their very design, such organizations do not allow for rapid adaptation and innovation in response to external change. What capabilities they do have for change are vested in a very few people, rather than harnessing the cognitive capabilities of every member of the organization.

Suppose that, rather than using the machine analogy, we use instead the complex adaptive systems metaphor in thinking about organization structure and design, and view our organization as one CAS made up of many other CASs, namely the human members, and attempting to survive in an environment of many other CASs, with whom we must both cooperate and compete. Then, by the properties of such systems, we know there will be an inherent tendency for self-organization among employees, that continual evolution (read change) will be required in all aspects of our activities, that our external environment is not static but co-evolving with us, and that we can expect periods of very rapid change interspersed with periods of slower change. How then should we design our organization? Pretty clearly, it should be the antithesis of the machine-derived model. It should feature few rigid operating procedures, it should have great flexibility in organization structure, it should have widely delegated decision authority with short authority chains, and it should be very sensitive to changes in its external environment. These are indeed the features of successful high-tech organizations; in fact, I submit that these features now characterize almost all high-tech organizations as a result of Darwinian selection over many generations of evolution.

Suppose we were to study the organizational structures of two large companies, the first being an old-line type such as General Motors 30 years ago (before the Japanese ate their lunch), and the second a large high-tech company such as Intel or Microsoft. Both would have organization charts we could study, and on the surface they would appear similar: a hierarchical tree of sub-organizational blocks, although the high-tech chart would probably be much flatter. There would likely be an attached commentary describing the basic activities and responsibilities of each component sub-organization, together with an overview of how the components relate to each other. If we went to the managers of the components of the top-level chart and asked how their part of the organization is organized, they would produce similar structures. At this superficial level, we might conclude there are no real differences in the organizational structure of the two companies. But if we dug to a deeper level of understanding, we would find profound differences. If we asked to see the company procedures manual, the old-line company would likely produce a multi-volume set, and advise us that each component organization would have its own additional volumes. In the high-tech company we would be given a very slim volume that contained very few procedures ("you will do it exactly this way"), but instead mostly policies ("here are some overall constraints on the actions you can take") and guidelines ("here are some suggested ways to do it which usually work, but you are free to find a better way"). There would be a discussion of the company's mission and a discussion of the values that are expected to guide the behavior of all employees. High-tech companies would consider it counter-productive to have highly detailed procedures for action and interaction; rather, they recognize that the formal organizational structure is just a guide for the kinds of relationships and interactions that need to develop for success, and that it is crucial to allow employees the maximum possible latitude for action. If we examined in depth the range of decisions managers at each hierarchical level could take without prior approval from a higher level or from peer levels, we would find it quite restricted for the old-line company but quite broad for the high-tech company; decision making, in other words, is broadly decentralized.

Rather than relying on a detailed formal organization structure to channel all activities and interactions, high-tech companies rely instead on the informal organization, the self-organizing networks of relationships that arise naturally from purposeful collective activity, and on temporary organizations, such as teams and task forces, for fast response to change. The informal organization contains collective wisdom about who has what skills and how best to solve problems. Further, it is fluid and adaptable. As conditions change, the informal organization deletes, modifies, and adds to its patterns of interaction in order to rapidly adjust to the situation. When a situation arises that strains the abilities of both the formal and informal organizations, rather than obsessing over how to optimize the formal organization chart to deal with it, the company marshals its best resources from throughout the organization, usually selected via the informal organization, and creates a temporary organization, endowed with appropriate authority, to determine and execute the appropriate response. Sometimes, after the organization has responded to some challenge through temporary organizational action, there emerges a realization that a modification to the formal organization chart is appropriate for the changed context, but note that this happens after the learning has occurred, not before.

Temporary organizations are not necessarily just ad hoc. They are often used routinely for recurring activities, such as teams for product development projects. Each time a new project is started, a team is named with representatives from each relevant formal organization component; the team is vested with full responsibility for the success of the project, then dissolved when the project is completed. In most high-tech organizations the concept of a team—a small group of experts from different domains, formed to work together on a problem that requires expertise from all of those domains—is a standard organizational management tool.

Successful high-tech organizations view organizational structure and design as tools to help organizations function, not as ends in themselves. In rapidly changing environments, an organization should have a toolbag of possible organizational structures that can be called into play depending on the context. A variety of forms may be in existence at any instant in time to deal most effectively with the issues of the moment. It can get messy, but if everybody understands how these structures work and how to operate within them, it can work fine. Of course, people need to be educated and trained in how to operate in teams, task forces, and their variants.

It should be apparent that the organizational characteristics I have described for high-tech organizations, with their flexible structures and loose "permission structures," allow the constituent human agents plenty of latitude to use their uniquely human cognitive skills for adaptation and innovation. But how do we encourage them to do so, and how do we make their efforts coherent, so that chaos and disorder do not result? When we humans are properly challenged and motivated, we love solving problems and coming up with new ideas. On the other hand, in our complex society we are entwined in many relationships other than the job, and we have a limited attention span, so we tend to fall into the habit of doing just enough to get by in some of our relationships in order to focus our creative energies on the more interesting or challenging ones. So the art of high-tech management is quite simple to state—do not let the organization believe "business as usual" is good enough to get by.

High-tech managers know that in order to succeed the organization must always be prepared to cope with changes in its external environment, and they know that the nature of external change is relatively long periods of slow change followed by short periods of very rapid change (punctuated equilibrium). They also know that there is no "one best way" to do things; the capacity of human cognition to adapt and learn is essentially unbounded, and the inexorable advance of technology continually offers new possibilities. Further, they know that creative change can come in two flavors, which I will call adaptation and innovation. Adaptation is incremental improvement by continually trying small changes in an activity or process, keeping those that work. Innovation is dramatic improvement by seeing different ways to approach the problem. Adaptive processes are low-risk, low-return per step, but over time lead to major returns through compounding, while innovation is high-risk, high-return per step. So high-tech managers push their organizations to continually experiment with new ways to do things, blending both adaptive and innovative efforts. If there is no external threat or opportunity to focus on at the moment for a particular part of the organization, then the focus should be on continually improving the quality and efficiency of current activities. (Of the plethora of management "fads," the one that I believe best explicates the principle of continuous improvement, can be applied to all functions of all organizations, and will stand the test of time is Total Quality Management (TQM) and its variations.) And of course the best way to succeed is, rather than to react to changes in the environment, to create by your own innovations those changes which will be viewed by your competitors as problematic changes in their external environment.
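
To illustrate the compounding claim with hypothetical numbers of my own (nothing here comes from the sources cited), a steady stream of small adaptive gains can rival a single large innovative leap over a few years:

```python
# Hypothetical illustration: a 2% gain from each small adaptive experiment,
# one experiment per month, compounded over five years.
small_step_gain = 0.02
steps_per_year = 12
years = 5

adaptive_result = (1 + small_step_gain) ** (steps_per_year * years)
print(f"cumulative gain from steady 2% steps over {years} years: "
      f"{adaptive_result:.1f}x")
# ~3.3x, comparable to a single high-risk innovation aiming for a 3x jump,
# but achieved through many low-risk steps whose failures are cheap.
```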

In an organization which demands constant experimentation, it is essential to realize that most experiments fail, but the ones that succeed more than make up for the costs of the failures. So the organizational incentive and reward systems (both financial and psychological) must reflect this; success should be handsomely rewarded, but most importantly, failure should not be punished. Only failure to experiment should be punished. The attitude toward failure should be "that didn’t work as we had hoped; what have we learned from that, and what shall we try next?"

To those who are steeped in the old paradigm of organizations as machines to be designed, and managers as "controllers" of the machine, it might seem that the kinds of organizations I've described above cannot achieve sufficient coherent, coordinated action to carry out their purpose. Surely allowing people to constantly experiment and change things, not to mention having the latitude to sometimes act to further their selfish personal objectives over those of the organization, must result in chaos. How do you control such an organization? What is the "glue" that holds things together? The answer is easily understood when organizations are viewed from a complex adaptive system perspective.

Humans, as a consequence of our evolutionary history, are naturally inclined to cooperative activities. We could not have survived as a species otherwise. And our capacity for self-organization is obvious everywhere; John Holland gives the example of New York City: "New Yorkers of all kinds consume vast stocks of food of all kinds, with hardly a worry about continued supply... yet [the city] has no central planning commission that solves the problem of purchasing and distributing supplies, nor does it maintain large reserves to buffer fluctuations; their food would last less than a week or two if the daily arrivals were cut off."[7] All that is required for any human organization to function coherently is a shared understanding of purpose and incentives sufficient to convince its members that their own best interest is served by orienting their behavior toward the purpose of the organization. So the glue that holds high-tech organizations together is a clearly communicated sense of purpose or mission, as well as a clearly communicated and constantly reinforced set of values governing behavior, together with incentive systems such as profit sharing and stock options to orient collective behavior towards accomplishment of the purpose.

So far I have been describing high-tech organizations from an inward-looking perspective—their organizational structures and management practices. Equally important is the manner in which they approach their relationships with the external world. They pay close attention to their interactions with external organizations—customers, suppliers, competitors—and think hard about the changes they see. They especially go to great lengths to involve their customers in determining features of new products. In light of our four properties of complex adaptive systems, it is easy to see that these characteristics would be essential for survival. An organization must recognize that it is not only a complex adaptive system itself, but that it is also a member of a higher-order complex adaptive system comprising itself and the other firms with which it interacts. Evolution, co-evolution, and punctuated equilibrium mean the company's world is not fixed but constantly changing; moreover, the exact nature of changes in the behavior of other agents and of the introduction of new agents is not merely unpredictable but unknowable [8]. In the next section, I will argue that one of the most effective ways for an organization to come to understand its world as it changes is through especially productive relationships called generative relationships. When viewed this way, two trends in high-tech behavior that go against the prescriptions of classical economics can be understood.

First, the nature of business contracts is changing. The classical economics approach leads to the view that contracts should attempt to envision all possible future eventualities, and specify a priori the rights of each party in each case. This leads to interminable arguments and negotiations and lost time. But if the detailed nature of outcomes is not only unpredictable but unknowable, and time is the scarce commodity, why bother? After a long period in which they trended towards increasing sophistication and complexity, contractual arrangements have become simpler and are based much more on trust. Rather than becoming obsessed with trying to make sure that a contract covers all the bases and protects them against every eventuality, however unlikely, successful organizations take the attitude that things will be worked out as situations arise. The emphasis in this environment is to stop wasting time and get on with the business at hand. How can an organization be responsive and keep pace if it is worrying about and spending time on contract details with low probabilities of relevance?

A second trend is toward closer relationships with suppliers. While classical microeconomics would predict that firms would buy only from the lowest bidder with no loyalty, high-tech firms (and now many non-high-tech firms) are doing just the opposite. Instead of playing off numerous suppliers against each other, these firms are reducing the number of suppliers but forming much closer relationships with the selected set. In a three-year period, Motorola reduced the number of its suppliers by 70% [9]. Reallocating relationship-management effort to fewer, more intensive partnerships rather than many arm's-length ones has several advantages, such as lower transaction costs, but a crucial one is the ability to better understand and adapt to changes through collective discourse and joint action.

Do the lessons of the evolution of high-tech organizations have any applicability to other sectors, such as the military or government institutions in general? I am not sure, but I think probably so, for a couple of reasons. First, on the metaphorical level, both the public and private sectors deal with complex adaptive systems and organizations: people working together to accomplish some purpose. We also know that most of the creativity and innovation in human activities comes from cross-domain analogies. That is, you develop a deep understanding of patterns of cause and effect in one domain of experience, perhaps physics or chemistry; you see patterns in another domain that at an abstract level resemble those of the first domain, so by analogy you hypothesize about cause and effect in the second domain. One could hope that applying the private sector's experience in adapting to the rapid pace of technological advance to a military organization is just such a cross-domain analogy. Second, all organizations have certain things in common. Both private sector organizations and the military need organizational structures, methods of coordination, and information systems. Each needs to recruit and train people, supply them with tools and materials, and deal with management issues, all in rapidly changing environments. Practices that are effective for these in the private sector may well be effective in the public sector.

Strategy under Complexity

In the previous section I have described some characteristics of high-tech organizations that enable them to adapt to rapid environmental change by constant experimentation and adaptation. But what about planning, in particular long-term strategic planning? Most high-tech organizations do not attempt detailed planning beyond 12-24 months, and even those plans are viewed as a guideline around which to organize and coordinate the activities of people, subject to frequent adjustment as events unfold. When it comes to longer term time horizons, they are highly skeptical of the standard methodologies of strategic planning that have been in vogue for many years, which are based on a presumption of underlying order that can be inferred. While many go through the motions of using the standard techniques, they place much more emphasis on the "gut-feel" of the key thinkers in the organization when it comes to decisions about major long-term investments and directions. In a human world that exhibits the properties of complex adaptive systems, implying unpredictable and unknowable novelty, is there any benefit to be gained by trying to think about the longer term? How should one go about it? My colleague David Lane and I [10] have developed some partial answers to these questions, and in this section I want to briefly introduce some of our ideas.

First, it is useful to make some distinctions about foresight horizons: how far ahead the strategist thinks he can foresee events. Foresight horizons can be clear, complicated, or complex. To illustrate, I quote from the paper by Lane and me:

Picture an 18th century general perched on a hill overlooking the plain on which his army will engage its adversary the next day. The day is clear and he can see all the features of the landscape on which the battle will be fought—the river and the streams that feed it, the few gentle hills, the fields and orchards. He can also see the cavalry and infantry battalions positioned where he and his opponent have placed them, and he can even count the enemy guns mounted in the distant hillsides. The battle tomorrow will consist of movements of these men across this landscape, movements determined in part by the orders he and his staff and their opposite number issue at the beginning of the day, and in part by the thousands of little contingencies that arise when men, beasts, bullets and shells come together. While he cannot with certainty predict the outcome of all these contingencies, nor of the battle that together they will comprise, he can be reasonably sure that one of a relatively small number of scenarios he can presently envision will actually come to pass...The general’s uncertainty has a clear terminal date: tomorrow, when the battle will have been fought and either won or lost...the general knows what he is uncertain about: not only which side will win the battle, but also the kinds of events that will turn out to be decisive...The general has a clear foresight horizon.

Now think about a U.S. cavalry column marching through an uncharted section of Montana in the early 1870s. The commanding officer cannot know the location of the nearest river or whether there will be an impassable canyon on the other side of the hills looming over his line of march. Nor does he know where the Indian tribes who inhabit this country have established their camps or whether they are disposed to fight should he come into contact with them. He knows the general direction in which he wants to take his men, but it would not pay him to envision detailed forecasts of what the next days might hold, because there are too many possibilities for unexpected things to happen. Instead, he relies on his scouts to keep him informed about what lies just beyond his own horizon, and he stays alert and ready for action. He is confident that he will recognize whatever situation he encounters, when he encounters it...The cavalry commander is concerned with getting his troops to their assigned destination, so his time horizon of relevant uncertainty is a matter of days or weeks...He could frame propositions about almost anything likely to be relevant to the completion of his mission, but it would amount to a very long list, most items of which would turn out not to matter anyway... The cavalry commander's foresight horizon is complicated. He knows the kinds of things that might happen, but because of the sheer number of possible geographical, meteorological and social combinations it is difficult to imagine them all at the outset of his mission. Nonetheless, he thinks he knows how to find out about the eventualities that are likely to matter in time to respond efficaciously to them.

Finally, imagine the situation of a Bosnian diplomat in early September 1995 trying to bring an end to the bloodshed in his country. It is very difficult to decide who are his friends and who his foes. First he fights against the Croats, then with them. His army struggles against an army composed of Bosnian Serbs, but his cousin and other Muslim dissidents fight alongside them. What can he expect from the UN Security Forces, from the NATO bombers, from Western politicians, from Belgrade and Zagreb, from Moscow? Who matters, and what do they want? On whom can he rely, for what? He doesn't know—and when he thinks he does, the next day it changes. The Bosnian diplomat has an uncertain time horizon—there is no end in view. He would be at a loss to name all the actors and events that could affect the outcome of the drama of which he is a part. In fact, no one could name them, because in the working out of the drama new actors keep getting drawn in and they create new kinds of entities—like the Rapid Deployment Force or the abortive Moscow Peace Meeting—that simply could not be predicted in advance. The Bosnian diplomat's horizon is certainly complicated, but there is more to it than that. Unlike the cavalry commander, his problem is not just to negotiate his way across a fixed landscape composed of familiar if presently unknown features. The social landscape through which he moves constantly deforms in response to the actions he and others take, and new features, not previously envisioned or even envisionable, emerge. Since his destination is always temporally beyond his current foresight horizon, the connection between what he does and where he is going is always tenuous and hence ambiguous. Inhabiting as he does a world of emergence, perpetual novelty and ambiguity, the Bosnian diplomat's foresight horizon is complex.[10]

If an agent has a clear foresight horizon, then the time-honored methodology of Decision Analysis is appropriate for strategic planning: determine the set of possible strategies, assess the outcomes of each and their probabilities, evaluate the relative value of each outcome, and calculate the optimum strategy. In complicated foresight horizons, the hopelessly large number of possible outcomes and the difficulty of assessing probabilities, let alone assigning values, force strategic planning to become the organization of processes of continuous experimentation, exploration, and rapid adaptation. This is the motivation for the recent spate of literature about 'the learning organization' [11,12]. But in complex horizons the very structure of the world in which the agent exists is undergoing change. What does strategy mean when "your world is under active construction, you are part of the construction crew, and there is not any blueprint"? [10]
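
A minimal sketch of that recipe, with invented strategies, probabilities, and values purely for illustration (this is the generic expected-value calculation, not a model from the Lane and Maxfield paper), makes clear how much the method depends on being able to enumerate the outcomes:

```python
# Decision Analysis under a clear foresight horizon: enumerate strategies,
# attach (probability, value) pairs to their possible outcomes, and choose
# the strategy with the highest expected value. All numbers are invented.

strategies = {
    "hold position": [(0.7, 10.0), (0.3, -5.0)],   # (probability, value)
    "flank attack":  [(0.4, 30.0), (0.6, -10.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

for name, outcomes in strategies.items():
    print(f"{name}: expected value = {expected_value(outcomes):.1f}")

best = max(strategies, key=lambda s: expected_value(strategies[s]))
print(f"optimal strategy: {best}")

# Under a complicated or complex horizon the outcome lists cannot even be
# written down, which is where this method breaks down.
```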

Complex foresight horizons emerge when cascades of change occur in agents, artifacts, and their relationships. These changes have two dimensions: cognitive and structural. By cognitive change we mean changes in the interpretations human agents make of their world: who the other agents are and what they do, what artifacts there are and what their functions and values are, and which agents interact in what ways with which other agents and with what artifacts. By structural change we mean the emergence of new types and instances of agents and artifacts (and the disappearance of others), coupled with new and rearranged relationships between agents and artifacts. These two dimensions are coupled by reciprocal causality—cognitive reinterpretations of the world lead to new actions by agents, which lead to new relationships with other agents and artifacts; and structural changes observed and experienced by agents lead to new interpretations of their world. Thus we have a dynamic feedback loop, and we know that feedback loops can be stable (negative feedback) or unstable (positive feedback). In our context, instability means constructive positive feedback, the emergence of new entities and relationships, resulting in complex foresight horizons.
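
A stylized sketch of that loop (a linear toy model of my own, not a formalism from the paper) shows how the stability of the cycle depends on the strength of the coupling between the two dimensions of change:

```python
# Toy model of reciprocal causality: cognitive change C drives structural
# change S with gain a, and observed structural change feeds back into
# further cognitive change with gain b. If a * b < 1, a small reinterpretation
# damps out (negative feedback); if a * b > 1, it cascades (positive feedback),
# the regime that produces complex foresight horizons.

def run_loop(a, b, steps=10, initial_c=1.0):
    C = initial_c   # a small initial cognitive reinterpretation
    S = 0.0
    for _ in range(steps):
        S = a * C   # reinterpretation -> action -> structural change
        C = b * S   # observed structural change -> further reinterpretation
    return C

print(run_loop(a=0.5, b=0.9))   # a*b = 0.45: the change dies out
print(run_loop(a=1.2, b=1.1))   # a*b = 1.32: a cascade of change builds
```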

Although human agents can passively observe aspects of their world with which they do not directly interact and make interpretations, the most important stimulation to reinterpretation comes through action, in particular interaction with other agents. Every agent engages in relationships—recurring patterns of interaction—with a relatively small number of other agents, and it is through these relationships that the agent can learn best about its world and changes to it. Most relationships—for example, impersonal buy-sell market interactions—do not permit the kind of information exchange that can stimulate innovative reinterpretations of the world by the participants. But a few relationships—Lane and I call them generative relationships (for an extended discussion of generative relationships, see [8] and [10])—do stimulate cognitive reinterpretations of the world by their participants, leading to the cascades of change of constructive positive feedback. So the dynamic feedback process that generates complex foresight horizons goes like this: generative relationships induce cognitive reinterpretations of the world, which lead to actions, which cause structural change, which generates possibilities for new generative relationships.

To illustrate the dynamics of generative relationships, I can cite an example from my experience in building ROLM Corporation. After six years in the mil-spec minicomputer market, we diversified into the telephone PBX market in 1975. This was a billion-dollar market, dominated by AT&T, that had been stable for a long time. The other participants in this market, all large companies, had long-established presences and market shares that had been relatively stable for decades. But two things had happened to destabilize the status quo. First, digital technology for switching and control was evolving very rapidly, but these complacent competitors continued to use old electro-mechanical switching and control technology in their products. Second, the industry had been deregulated by the Carterfone decision in 1968, allowing PBXs to be marketed competitively rather than being available only through the local telephone service monopoly. By 1974 nothing much had happened; it was still a billion-dollar market dominated by AT&T. ROLM developed a digital, computer-controlled PBX which turned out to be wildly successful. While there were no doubt many contributing factors to our success, one of the most interesting involves the changes over time in the perceptions we and our customers held about the artifact and our relationship to it. These changes were fundamental to the co-evolution of the market, the players, and the technology.

The advanced technology introduced in the ROLM PBX could be considered analogous to the biological evolution of the nervous system. While it initially provided new useful functions, it also provided a flexible platform for further evolution of radically new functions. In the biological sphere, the evolution of the nervous system to the human brain is measured in millions of years, while the time frame of functional evolution of technology in the human world is measured in years or even months. In the initial version of the ROLM PBX, we programmed the embedded control computer with all the functions we thought could be useful to organizations, such as least-cost routing of long-distance calls, automatic dialing, and call detail recording. We knew there might be other functions that would turn out to be useful, but we had no idea what they might be. ROLM focused on telecommunications managers of the very largest companies as a key market segment. We did that because these large firms were very sophisticated, with large telecommunication budgets and centralized decision making, and the new functions of our product had greater relative benefit for them than for smaller companies. It was initially very hard to make inroads with these individuals, because they were used to buying whatever AT&T told them to (a situation very similar to that faced by early innovators in the computer industry who had to compete with IBM). But we felt that if we focused intensely on serving these customers we could convince them. A few tried our product and found that not only did it do what we said it would do, but they saved so much money that they became heroes in their own companies. But more importantly, they began to relate to us other needs that they had. They would come back and say, "We've been thinking of buying this automated call distribution system from Collins, but we only have fifty people handling incoming calls to our service department, whereas the Collins system is designed for thousands of airline reservation agents and is uneconomical for us; why couldn't you program these kinds of features into your PBX?" We asked our engineers how hard that would be, and realized it would be fairly easy. We went around to some other customers and explained the application, and it turned out almost every one of them had had very similar needs. So within a year we incorporated an Automatic Call Distribution function in the next version of the product, and it was very successful. And other ideas began to emerge from our customers, such as centralized attendant service, that drove the continued transformation of the product. These intense working relationships between manufacturer and consumer not only drove the evolution of the product; they also transformed our company and the whole PBX industry.

As a result of these interactions, we changed our idea of what ROLM was all about. We were not developing telephone systems; we were developing line-of-business communication systems for reducing costs and increasing the efficiency of organizations. With that new mindset, all kinds of possibilities opened up for new applications of our technology. And as we introduced a steady stream of new innovations every few months, we continued to distance ourselves from the old-line competitors, who were accustomed to product cycles of many years.

The telecommunication managers who were early adopters of the ROLM PBX enjoyed transformations as well. Because of the benefits they delivered by embracing the new technology, they gained credibility and promotions within their companies. They previously had a relatively low-level position on the corporate ladder—much lower than the MIS manager—because with the old technologies there wasn't much possibility of innovation. Their promotions began to put them on a par with MIS managers. At the annual meetings of the professional association comprising their peers—the International Communications Association—they would give formal presentations about the productivity-enhancing capabilities of the ROLM PBX, and later, over drinks in the bar, describe to their peers the personal rewards and recognition they had won. This led to a surge of interest by other large companies, which then stimulated interest among smaller companies that looked to the larger companies for leadership. The rapidly increasing revenues to ROLM in turn allowed an even higher level of investment in continuing product innovation, and this virtuous cycle of "increasing returns" [13] enabled ROLM to emerge as a major force in a transformed industry.

In a span of five years, an unknown company, ROLM, had captured the second largest market share in a market that had been stable for decades. By 1980, three companies—AT&T, ROLM, and Northern Telecom—had 80% of the U.S. PBX market. All of the other original major PBX manufacturers had been eliminated or marginalized, and a handful of new players had footholds. Interestingly, the same three entities (ROLM is now owned by Siemens) continue to dominate the market in 1996, sixteen years later. This provides a good example of punctuated equilibrium; the PBX market was stable for many years, then underwent a transition over only 5 years to its present stable state. I believe a key reason for ROLM’s success was developing generative relationships with its key customers, leading to positive feedbacks that accelerated its rate of product innovation and market acceptance.

If we interpret the ROLM story using the abstract terms of the dynamics generating complex foresight horizons, it goes like this. A small agent (ROLM), looking for new opportunities, sees a possibility of using an artifact about which it has deep knowledge—small computers—as the basis for making an improved version of another artifact—a telephone switching system (PBX). After developing the new artifact, the company must form new seller-buyer relationships with a class of unfamiliar agents—large companies with significant telecommunication costs. Once a few such relationships have formed, some of them become generative. The telecommunication managers of the large companies, having demonstrated the hoped-for large cost savings with the new PBX, receive unaccustomed accolades from their organizations, and realize that they can continue to beneficially transform their own identity in the organization through additional applications of the new artifact. They turn to ROLM with requests for enhancements to the PBX to enable the new applications. This leads ROLM to realize that the possible functionality of the artifact it has designed is much broader than just traditional PBX features, implying a much larger market, and it focuses its key engineering talent on pursuing these ideas. ROLM reinterprets its mission (identity) as providing business communication systems, not just telephone systems. At the same time, the successes of the early customers spread via their professional relationships with peers in other companies, leading to an exponential increase in new agent relationships for ROLM (some of which also generate new ideas), providing a rapid increase in revenue, which in turn allows increased investment in product enhancements. This virtuous cycle leads to explosive growth for ROLM and rapid capture of market share. So we see that the generative relationships led to reinterpretation of self-identity by both ROLM and the telecommunication managers, as well as reinterpretation of the functionality of the new artifact, and these in turn led to structural change (dramatic shifts in market share) in what had been a stable market, as well as major changes in the perception of what functionality constituted a modern business voice communication system.

But why didn’t other old-line players react quickly to preserve their position, and why didn’t other computer-knowledgeable companies with superior resource bases muscle their way into this newly energized market? I believe the answer is that in order to survive and prosper during cascades of change, an organization must, first, be embedded in the generative relationships that cause the changes, and second, be capable of focused, rapid action in response to perceived opportunities. If an agent is in a position to comprehend change only by observing the end structural results rather than the earlier cognitive shifts that produce them, it will have great difficulty moving rapidly enough to succeed. And if the agents who are in the generative relationships do not exploit the opportunities quickly, they risk eventually being displaced by those with larger resources. Although the old-line PBX competitors had existing relationships with their customers, these relationships did not become generative, for two reasons: complacency and size. Lulled into a false sense of security by years of "business as usual," they did not feel a need to maintain continual intense discourse with their customers, and when they belatedly realized the implications of computer-controlled PBXs, they were too big and bureaucratic to respond quickly enough. Similarly, by the time potential new competitors outside the industry recognized the structural changes taking place, it was too late to insert themselves in an effective way.

Strategic Practices

The foregoing discussion and story provide the basis for a partial answer to the question of what strategic thinking means when an organization finds itself with a complex foresight horizon. Lane and I [10] suggest that such organizations should put into place two strategic practices: populating the world, and fostering generative relationships. Populating the world is a process of discourse to construct and interpret a representation of the external environment—who and what are the agents and artifacts that constitute the world, what are their relationships, and how are they changing? This entails, of course, gathering information from many sources, but most importantly, pattern recognition and interpretation. Fostering generative relationships is an attempt to secure a position in the world which will enable the organization to recognize and influence emergent opportunities. Based on the organization’s current interpretation of its world, it invests resources in existing relationships that have the potential for—or already demonstrate—generativeness, and it seeks to establish potentially generative relationships with new agents.
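As a purely illustrative aside, the kind of representation that populating the world produces can be pictured as a small data structure of agents, artifacts, relationships, and the interpretations currently attached to them. The sketch below is my own minimal rendering of that idea in Python; the entity kinds, field names, and example entries are hypothetical and not drawn from the paper.

```python
# Illustrative only: a bare-bones "world representation" of the sort that
# populating the world might produce. All kinds and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str            # a customer, competitor, regulator, ...
    kind: str            # "customer", "competitor", "supplier", ...
    interpretation: str  # current reading of its identity and direction

@dataclass
class Artifact:
    name: str            # a product, standard, or technology
    interpretation: str  # what functionality it is currently seen to have

@dataclass
class Relationship:
    a: str
    b: str
    kind: str                                   # "buyer-seller", "alliance", ...
    generative_signs: list = field(default_factory=list)

@dataclass
class World:
    agents: dict = field(default_factory=dict)
    artifacts: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    def reinterpret_agent(self, name: str, interpretation: str) -> None:
        """Record a revised interpretation as new patterns are recognized."""
        self.agents[name].interpretation = interpretation

# Hypothetical usage: interpretations are revised, not just accumulated.
world = World()
world.agents["BigTelco"] = Agent("BigTelco", "customer",
                                 "centralized buyer, open to cost savings")
world.reinterpret_agent("BigTelco", "source of ideas for new applications")
```

The point of the exercise is not the data structure itself but the discipline it implies: the representation is provisional, and its value lies in being continually revised as new patterns are recognized and interpreted.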

If it is true that generative relationships are an important aspect of success under complex foresight horizons, then how are they fostered? After all, I have argued that their benefits are unforeseeable and that not all relationships become generative. The generative potential of a relationship can be analyzed by assessing the degree to which the following essential preconditions are met (a simple scoring sketch follows the list):

First, there must be aligned directedness. This simply means that the participants have a compatible orientation of their activities; for example, one party is interested in using an artifact, the other in supplying it. Or two nations are concerned about defending themselves from a common potential aggressor. Or the Army and Navy are each trying to develop weapon systems on limited budgets.

Second is heterogeneity: the participants have to differ in key respects. They have to have different competencies, different access to other agents or artifacts in the world, or different points of view about how to think about agents or artifacts. In a sense, they need to be an interdisciplinary team. An example is the Santa Fe Institute’s Business Network, with some thirty members from business, government, and the military. They meet with the scientists two or three times a year to get exposure to new ideas. They are gathered around a common set of ideas and metaphors about complex systems, and a number of novel joint projects have emerged. In the example of two nations concerned with defense, one has a strong navy, the other a strong army, and each has alliances with other nations.

Third, mutual directedness is needed. It is not enough to have synergistic interests and differing perspectives; the agents must also seek each other out and develop a recurring pattern of interactions. You have to have an interactive relationship to begin with before it can become generative. There are many kinds of natural role-based relationships, such as supplier-buyer or trading partner, and these are usually the seeds of generative relationships. Generative relationships can arise serendipitously from existing natural relationships, or an organization may seek out new relationships based on its perception of generative potential. Within an organization, management may perceive the possibility of generative potential between two sub-organizations and create incentives for mutual directedness. For example, if a portion of the budget for new weapons systems were earmarked for common sub-systems or technology developed jointly and endorsed by all three services, it might induce new relationships that could turn out to be highly generative.

The fourth precondition for generativeness is permissions. The individuals interacting in the relationship have to have appropriately matched permissions or authorizations from their respective organizations to engage in an open and extensive level of disclosure and dialogue. Without this, the generative potential is blocked. In relationships between organizations with multi-level reporting hierarchies, generative potential is greatly enhanced by establishing regular discourse between the responsible individuals at each hierarchical level and their peers in the other organization. This not only allows quick adjustment of mismatched permissions and rapid response to action opportunities, but also provides even more heterogeneity in the relationship because of the differing ranges of perspective and knowledge inherent at the various hierarchical levels.

Finally, there must be action opportunities. As ideas for new possibilities arise from continued interaction, there has to be the opportunity to engage in joint action based on those ideas. Relationships that involve only talk do not last long or deeply affect agent identities. Action itself more clearly reveals the identities of the participating agents and enhances the development of mutual trust. It is interesting to consider what might have happened if the U.S. and USSR, with an aligned directedness toward strategic arms limitations, had chosen to proceed not by sending a small team of negotiators to Geneva to spend years sitting across a table talking at each other (preceded by years of arguments about the size and shape of the table), but rather by a process of taking small joint actions, such as destroying a handful of weapons with mutual inspection, then taking another step based on the experiences of the first, and so on. Another reason for action opportunities is that new joint competencies can emerge only out of joint action, and these joint competencies lead to changes in agent identities and even to the emergence of new agents.
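As a rough illustration of assessing "the degree to which the preconditions are met," the sketch below scores a relationship against the five preconditions just discussed. The zero-to-one scale, the field names, and the rule of focusing on the weakest precondition are my own hypothetical conventions, suggested by the observation that an unmet precondition blocks generative potential; they are not part of the framework itself.

```python
# Illustrative sketch only: a crude checklist for scoring a relationship
# against the five preconditions for generativeness discussed above.
# The 0-to-1 scale and the "weakest link" rule are hypothetical conventions.
from dataclasses import dataclass

@dataclass
class GenerativePotential:
    aligned_directedness: float   # compatible orientation of activities
    heterogeneity: float          # differing competencies, access, viewpoints
    mutual_directedness: float    # recurring, sought-out interaction
    permissions: float            # matched authorization for open disclosure
    action_opportunities: float   # chances to act jointly on emerging ideas

    def weakest_precondition(self) -> str:
        """Since any unmet precondition blocks generativeness, the weakest
        one is where fostering effort is best directed."""
        scores = vars(self)
        return min(scores, key=scores.get)

# Hypothetical example: strong alignment but tightly restricted permissions;
# the sketch flags permissions as the binding constraint to work on.
r = GenerativePotential(0.9, 0.8, 0.7, 0.2, 0.6)
print(r.weakest_precondition())   # -> "permissions"
```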

Although I have framed this discussion of generative relationships in terms of interactions between independent organizations such as companies or nations, the ideas are just as valuable when applied to dependent organizations, such as departments within a company. Dramatic innovations can come about when functional sub-organizations depart from the norm of viewing their dependence relationships with other sub-organizations as a necessary evil that gets in the way of accomplishing their purpose, and instead develop ongoing dialogues oriented around understanding each other’s problems and initiating actions to improve the efficiency of both. One of the key responsibilities of management should be maximizing the generative potential of relationships, both within its own (sub-)organization and with other (sub-)organizations.

Conclusion

The rapid rate of change in our modern world, driven by the enabling technology of the transistor, has strained the ability of many organizations to function effectively. One reason is that the old intellectual framework presuming a stable, or at least slowly changing, economic and social order—upon which the conventional management wisdoms are based—does not apply in rapid transition periods such as the one we now experience. This paper has argued that applying the metaphors of the science of complex systems to the human world can provide a new intellectual framework for the management of organizations, within which the successful attitudes, methods, and practices that have evolved in the high-tech sector over several decades can be seen to make sense. High-tech organizations understand that time is the scarce commodity and people are the key asset, and this understanding has resulted in common practices: loose permission structures rather than strict operating procedures; reliance on informal and temporary organization structures rather than rigid hierarchies; incentives that reward experimentation and do not punish failure; reliance on a shared sense of mission and set of values to ensure coherence; and simple contracts and close relationships with other organizations. There is a high likelihood that, at the proper level of abstraction, these practices can be applied to organizations in all sectors that face rapid change, including the military and international relations.

The prospect of unpredictable and unknowable events and emergent entities may seem to make the concept of long-term strategic planning useless. But an understanding of the mechanisms by which such changes come about—reciprocal causation, in which human organizations reinterpret their world and act accordingly, while the structural change emerging from their aggregate actions causes them to reinterpret their world anew—leads to practices that can allow organizations to proactively improve their prospects for success. Two such practices have been discussed: populating the world—the continual reinterpretation of the organizations, institutions, artifacts, and relationships that comprise one’s environment; and fostering generative relationships with selected organizations to maintain a position from which to participate in the construction of the emerging world.

Acknowledgment

I am indebted to Dr. David Alberts for valuable assistance in the preparation of this paper.

End Notes

1. P. Anderson, K. Arrow, D. Pines (editors), The Economy as an Evolving Complex System, Addison-Wesley (1988).

2. T. Kuhn, The Structure of Scientific Revolutions, University of Chicago Press, Chicago (1970).

3. J. Rosenau, "Many Damn Things Simultaneously: Complexity Theory and World Affairs."

4. P. Drucker, Post-Capitalist Society, HarperCollins Publishers, New York (1993).

5. "World Economy Survey," The Economist, September 28, 1996.

6. "Netspeed at Netscape," Business Week, February 10, 1997.

7. J. Holland, Hidden Order, Addison-Wesley (1995).

8. D. Lane, F. Malerba, R. Maxfield, and L. Orsenigo, "Choice and Action," Journal of Evolutionary Economics 6, 43-76 (1996).

9. "Tying the Knot," The Economist, May 14, 1994.

10. D. Lane and R. Maxfield, "Strategy under Complexity: Fostering Generative Relationships," Long Range Planning 29 (2), 215-231 (1996).

11. P. Senge, The Fifth Discipline: the Art and Practice of the Learning Organization, Doubleday/Currency, New York (1990).

12. D. Garvin, "Building a Learning Organization," Harvard Business Review, July-August, 78-91 (1993).

13. B. Arthur, "Complexity in economic and financial markets," Complexity 1 (1), 20-25 (1995).

