Internet governance in light of Egypt’s network shutdown

On January 27, 2011, in the midst of massive anti-government protests in Egypt, the Egyptian government initiated an unprecedented measure to disrupt network access to, from, and within the country. Several network monitoring sites reported that at around midnight, almost all routes to Egyptian networks were withdrawn from the Internet's global routing table. This sounds like some pretty heavy-duty techy stuff, but in fact, it isn't, and the Egyptian government's action has exposed a very serious weakness in Internet governance that has tended to fly under the radar. The action involves what is called the Border Gateway Protocol (BGP), the communication protocol that certain "gateway" routers use to announce to each other what traffic should be delivered to them. Most of us casual Internet users are never aware of gateway routers or BGP, but without them the Internet would be useless to us because our communications wouldn't know where to go.
BGP is a notoriously insecure protocol. There have been several incidents where misuse of BGP, whether intentional or not, has resulted in significant disruptions of global network traffic. What happens is that a gateway router starts to announce that it can handle traffic for which it is not authorized. Network traffic destined for the addresses that the router is falsely announcing is forwarded to that router and eventually hits a dead end. One well-documented incident, often called the Chinese Internet Hijack, occurred in April 2010, when a Chinese router started announcing a huge number of addresses for which it was not responsible. Consequently, Internet traffic ended up being forwarded to Chinese network space, even though it was intended to go somewhere else entirely. On a global scale, few Internet users were affected by the Chinese Internet Hijack because, although the Chinese gateway router was announcing the addresses, it wasn't doing so in a way that presented itself as an optimal route for the traffic. This was not the case in another incident, in 1997, involving a gateway router in Florida. In that incident, the gateway router announced the entire Internet routing table and, additionally, claimed that it could deliver traffic to these destinations in a single hop. The result was that this route was seen as the optimal route for all Internet traffic. Much of the Internet effectively went down that day.
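To make the mechanics concrete, here is a minimal sketch in Python of the route-selection logic that both incidents exploit. The prefixes and AS numbers are invented for illustration, and real BGP route selection involves additional steps (local preference, origin type, and so on); the point is simply that a more specific prefix, or a shorter AS path, attracts the traffic.

```python
import ipaddress

# Hypothetical announcements heard by a router: (prefix, AS path).
# The legitimate route to 203.0.113.0/24 passes through three networks;
# the rogue router announces the same prefix with a one-hop path,
# much like the 1997 incident described above.
announcements = [
    (ipaddress.ip_network("203.0.113.0/24"), [64500, 64501, 64502]),  # legitimate
    (ipaddress.ip_network("203.0.113.0/24"), [64666]),                # false one-hop route
]

def best_route(destination, routes):
    """Simplified BGP-style selection: the most specific matching prefix
    wins, and a shorter AS path breaks ties."""
    candidates = [(prefix, path) for prefix, path in routes if destination in prefix]
    return max(candidates, key=lambda r: (r[0].prefixlen, -len(r[1])))

prefix, as_path = best_route(ipaddress.ip_address("203.0.113.7"), announcements)
print(f"Traffic is sent along AS path {as_path}")  # the falsely short path wins
```

The same machinery explains the Egyptian shutdown: withdraw the announcements entirely and no candidate routes remain, so traffic for those prefixes simply has nowhere to go.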
The gateway router system has long operated on mutual trust. In some ways it is a throwback to the early days of the Internet, when there was a strong sense of camaraderie among the small group of users and security wasn't considered a significant issue. Several BGP-related incidents have highlighted the need for stronger security, and that need is being addressed. However, the actions of the Egyptian government raise an entirely new question regarding BGP: should it be possible for a national government to control Internet routing information the way the Egyptian government has done? This is a serious Internet governance issue, much more so than the issues regarding control over the Domain Name System (DNS) that have tended to dominate Internet governance debates. The Egyptian government has set a precedent for how governments can effectively shut off the Internet and, as long as the system is unchanged, the Egyptian method is likely to be used again.
So I leave you with the following questions:
Should national governments be free to claim control over core Internet traffic management systems?
If not, how should the problem be addressed?
a) Leave it up to nation-states to implement preventative provisions on the basis of their legal systems?
b) Revamp the global routing system to make it impossible for a nation to do what the Egyptian government has done?
c) Other (suggestions are welcome)?


Anticipated ICT futures and education: Background info for a scenario construction exercise

This essay is part of the preparatory phase for a scenario construction exercise regarding ICT and education. It looks at future developments of ICTs and how they will affect education. Sorry about the missing references. I will add the reference list sometime soon.

ICTs and educational institutions: A growing divide?
The transition from the manageable, stationary desktop computers of the 1990s to the highly personalized compact laptops, cell phones and smart phones of today has dampened educators' once favorable views of ICTs. Educators increasingly view ICTs as distracting technologies, both inside and outside of the classroom, and readily support any number of measures to limit their use, including blanket bans on the use of any personal ICTs in educational institutions (Kolb, 2008). This is in stark contrast to many educators' more optimistic views in the late 1980s and 1990s, when ICTs were generally embraced as transformative technologies that would modernize education (Bigum & Kenway, 2005; Collis, Veen & De Vries, 1993). But as the rate of development of ICTs quickly overtook educational institutions' ability to transform themselves in the early 2000s, educators have found themselves competing with realities for which they are wholly unprepared and which they are mostly unable to accommodate. Young people today live out a considerable portion of their social lives online, through computers and cell phones, which influences their values, norms, knowledge, and ways of knowing (Lenhart, Ling, Campbell & Purcell, 2010; Ito, Horst, Bittanti, Boyd, Herr-Stephenson, Lange, et al., 2008). Unsurprisingly, since educational institutions largely deal with these same attributes, young people's interactions online affect the way that they perceive the role and purpose of educational institutions, and there is an obvious conflict between the collaborative and open nature of young people's online realities and the structured and disciplined realities of most current educational institutions. It can reasonably be assumed that educators' current attitudes toward ICTs in education are, in part, a reflection of this growing divide between young people's perceptions of educational institutions and their own. Since there is nothing to indicate that the development of ICTs is in any way subject to the will of educational policy makers and educators, how this divide develops in the long-term future depends on future ICT developments and how educational institutions react to them. In this chapter we will examine two foreseeable development trajectories for ICTs that are likely to produce unexpected consequences for educational institutions, given their current approaches to ICTs. The first is the increasing miniaturization and personalization of ICT hardware. The second is the increasing difficulty of managing access to information resources.


Scenario planning: considering long-term impacts of decisions made in the present

In previous posts I said that I would, at some point, start posting some things here regarding methods used in technology foresight. Here is the first of these.
It might make more sense to start with Delphi methods, which are the most commonly used in large-scale foresight programs, but Delphi methods are complex and difficult to get right. In fact, many of the criticisms directed at the early attempts at technology foresight that I have written about in previous posts really had to do with a lack of proper rigor and objectivity in Delphi-type studies. Nevertheless, I'm going to save the Delphi methods for later and start with scenario planning, which is very commonly used, rigorous when done right, and equally suitable for small-scale projects and larger comprehensive foresight programs.
The aim of scenario planning is, basically, to construct plausible case descriptions of the future based on certain variables that are expected to produce a significant amount of uncertainty for future planning. Scenario planning thus seeks to reduce uncertainty by extending developments identified in the present into the future. Most importantly, the purpose of scenario planning is not to present visions of preferred or less preferred futures, but merely to provide decision makers with reasonable expectations of what might transpire given certain types of development over the long term. In most rigorous scenario planning exercises there is, in fact, no attempt to evaluate the scenarios generated, as that would affect the integrity of the exercise and increase the potential for bias.
This article provides a brief overview of the history of scenario planning, its current state, and methodologies commonly used in scenario planning exercises. I will follow up in the near future with some scenarios that I have generated focusing on the development of information and communication technologies and their impact on education.
Scenario planning
Modern scenario planning emerged from Herman Kahn's military planning work for the Rand Corporation in the 1950s (Bradfield, Wright, Burt, Cairns & Van Der Heijden, 2005; Lindgren & Bandhold, 2003; Schwartz, 1996). Kahn used scenario planning to help the military prepare for the uncertainties of the Cold War period, when high tensions between the U.S. and the Soviet Union fueled global political instability. Kahn's scenario planning allowed the U.S. government to prepare for a range of possible situations at a time when an appropriate immediate response to an unexpected situation could make the difference between cautious restraint and global nuclear war. In the 1970s, scenario planning spread beyond the military sector to other public policy areas and to private business. The Royal Dutch/Shell company was especially influential in adapting scenario planning to private sector needs and created a market for consultancy firms that facilitated its further spread in the private and public sectors (Van Der Heijden, Bradfield, Burt, Cairns & Wright, 2002; Lindgren & Bandhold, 2003).
There is some disagreement as to how, precisely, scenarios should be defined. Lindgren and Bandhold (2003) list several scholars' definitions that, despite some subtle differences, reflect a general agreement that scenarios are not forecasts or projections of desired futures. Scenarios make use of trends and developments in the present to provide subjective descriptions of how these may play out in the medium- and long-term future. Snoek (2003), citing Dammers, describes how scenarios differ from other future-oriented planning tools. Dammers (in Snoek, 2003) describes a typology of future-oriented planning strategies based on the level of uncertainty inherent in the situation being analyzed, as indicated by the number of available facts and theories pertaining to it. Scenarios are appropriate when there is a relatively high level of uncertainty: facts pertaining to the situation are few and applicable theories are many. In this, they differ from more analytically oriented strategies, where analysts and decision makers can draw on numerous relevant facts to provide justified projections of futures, and from situations where few available facts and few applicable theories result in highly speculative visions of possible or desired futures.
Dammers' (in Snoek, 2003) typology provides a useful gauge for evaluating when scenarios are appropriate and determining their purpose. Firstly, scenarios are appropriate when a situation is directly or indirectly influenced by rapid or unforeseeable changes, and where conflicting approaches are brought to bear on it. Secondly, scenarios are intended to provide decision makers with viable descriptions of future situations arising from this uncertainty, to highlight possible problems that can be addressed in the present. When scenarios are deemed appropriate, they are presented as a number of plausible futures, each dependent on possible courses of action, rather than as a single anticipated or desired vision. Scenario planning and generation does not include evaluation of the scenarios. Rather, they are presented as equally viable consequences of decisions and developments over the medium and long term. As such, scenarios are intended to generate discussion rather than promote particular courses of action (Van Der Heijden, Bradfield, Burt, Cairns & Wright, 2002).
Methodologies
Modern scenario planning has developed rapidly over the past few decades. Along with refinements regarding the definition of scenario planning, several methodologies pertaining to problem identification and scenario generation have been introduced and refined. Because scenarios have been applied to diverse areas of planning and decision-making, some methodologies have become highly specialized and may not be applicable to all situations. Bradfield et al. (2005) list the following areas in which scenario planning has been applied to illustrate the current diversity of the field (pp. 796-797):
  • crisis management: e.g. civil defense exercises;
  • the scientific community: e.g. effectively communicating scientific models and theories pertaining to environmental change;
  • public policy makers: e.g. involving multiple agencies and stakeholders in policy decisions;
  • professional futurist institutes: e.g. communicating critical trends;
  • educational institutions: e.g. creating future learning environments;
  • businesses: e.g. long-range planning.

The diversity illustrated by the list above demonstrates the importance of choosing a methodology appropriate to the needs and unique circumstances of the situation to which it will be applied. In this section we consider some commonly used scenario planning methodologies and how they apply to the situation under consideration.
Given the Rand Corporation's historical role in the development of scenario planning, its influence on scenario planning methodologies is not surprising. Early scenario planning within Rand was applied exclusively to military planning. At that time, Rand worked on a contract basis for U.S. defense branches. The contracting parties generally presented Rand analysts with potential crises, e.g. wars or conflicts in specific areas or of a specific nature, and the role of the Rand analysts was to work backward to derive an explanation, or possible explanations, for the crisis as ordered (DeWeerd, 1973). It was not until the early 1970s that this backward approach was challenged by DeWeerd, who was working at Rand at the time. In a Rand paper, DeWeerd described an alternative method that considered current contexts and evolving trends to describe potential future crises, rather than backtracking from a hypothetical crisis to determine its causes. In the decades since, Rand has developed highly quantitative methods that use calculated probabilities to generate scenarios based on present situations and emerging trends (Camm & Hammitt, 1986). While this methodology provides a high level of reliability, its resource requirements are beyond the reach of many organizations that could benefit from scenario planning.
Modern scenario building is more oriented toward the forward-looking contextual approach described by DeWeerd (1973) than toward Rand's earlier backtracking approach. It was precisely along these lines that the qualitative scenario-building methodology described by Schwartz (1996) was developed at Royal Dutch/Shell in the early and mid 1970s. Schwartz especially emphasizes the identification of "driving forces" that are likely to produce conditions for change. Relevant driving forces depend on the situation or issue being addressed, but are likely to include social, economic, political, environmental, and technological forces. The goal of identifying relevant driving forces is to observe and expand on emerging trends that are likely to affect the situation or issue being addressed. Finally, the most relevant driving forces and emerging trends are applied to the context of the issue to generate scenarios.
Schwartz's (1996) qualitative approach, focusing on driving forces, forms the basis for most contemporary scenario building methodologies. However, methodologies in current use differ depending on the perceived goal of the scenario building activity. For example, Lindgren and Bandhold's (2003) and Van Der Heijden et al.'s (2002) methodologies differ from Schwartz's in several respects, primarily because of the business orientation of the authors' scenario building activities. Lindgren and Bandhold's strategic TAIDA framework, for example, goes beyond what is usually expected of scenario building exercises to include the generation of desired scenarios and an action component for short-term goal setting in response to generated scenarios. Van Der Heijden et al.'s STEEP framework also reflects a strong business orientation in the driving forces that are emphasized. These include economic factors such as fiscal policies and taxation, and social factors such as demographics and taste. Although the STEEP factors can certainly be applied to most situations in modern Western market-driven societies, they are likely to be beyond the scope of targeted scenario building exercises in many fields.
Of more general interest in Van Der Heijden et al.'s STEEP framework is the process of identifying and categorizing significant driving forces and "polar outcomes" (2002, n.p.). Polar outcomes, in this sense, are not necessarily direct opposites, but should reflect the consequences of developments relevant to the scenario context. For example, the proliferation of wireless information technology may result in considerable costs for developed countries, due to the need to replace existing infrastructure, while presenting a cost-effective means for implementing IT in developing countries where no traditional infrastructure existed before. Thus, the widespread adoption of wireless IT may have very different consequences depending on the context in which it occurs.
The goal of determining driving forces and polar outcomes is to identify areas of critical uncertainty. By categorizing forces and outcomes based on their predictability and potential impact on the situation of concern, we identify the potential developmental combinations that are likely to result in unexpected outcomes. It is these unexpected outcomes that are of primary interest for the scenario building exercise since they are least likely to be anticipated. Snoek (2003) employs this method in determining scenarios for the future development of teacher education in Europe. Snoek uses a two-dimensional matrix to evaluate possible scenarios based on two continuum variables derived from an analysis of relevant driving forces. The specific scenarios to be developed are based on points in the matrix. In Snoek’s case, these points simply represent the four quadrants of the two-dimensional matrix. In some cases, further analyses of specific points in the matrix may reveal variable combinations that are more interesting for the scenario building activity than others.
[Figure: example of a scenario decision matrix with two dimensions, A and B]
In deciding which scenarios are of interest, we would plot points on the above matrix corresponding to various combinations of the A and B dimensions. Usually, scenario planning exercises will generate a number of alternatives. For example, we might decide that realistic and interesting scenarios would be based on the extremes in each matrix quadrant, i.e. high A/high B, low A/high B, high A/low B, and low A/low B. This would result in four distinct scenarios describing circumstances given these combinations. Realistically, extremes are usually not likely, so the scenarios will typically be based on varying levels of each component. Which points to designate for scenario generation is determined by in-depth analysis of likely developments over the long term.
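As a toy illustration of this quadrant step (a minimal sketch with made-up dimension names, not part of Snoek's method), the four extreme scenario seeds can be enumerated mechanically from the two dimensions:

```python
from itertools import product

# Hypothetical dimensions; in a real exercise these would be derived
# from an analysis of the relevant driving forces.
dimension_a = "ICT personalization"
dimension_b = "institutional openness"

# One scenario seed per matrix quadrant: every high/low combination.
for level_a, level_b in product(["high", "low"], repeat=2):
    print(f"Scenario: {level_a} {dimension_a} / {level_b} {dimension_b}")
```

The actual work of scenario building, of course, lies in writing out a plausible narrative for each of these combinations.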
Most scenario experts suggest that an even number of scenarios be constructed and that they correspond to varied levels of each dimension. This is to avoid the perception that the scenarios depict more or less preferred futures. For example, if we were to construct three scenarios based on high A/high B, moderate A/moderate B, and low A/low B, it is very easy for the human mind to categorize these as representing a scale of preference, e.g. highs are good, moderate is an acceptable compromise, and lows are bad. The scenario planner wants to avoid creating these kinds of perceptions because the intent is not to evaluate the scenarios. Rather, the intent is to provide fuel for constructive dialogue about how decisions in the present may affect the future.
References
Bradfield, R., Wright, G., Burt, G., Cairns, G., & Van Der Heijden, K. (2005). The origins and evolution of scenario techniques in long range business planning. Futures, 37, 795-812.
Camm, F., & Hammitt, J. K. (1986). An analytic method for constructing scenarios from a subjective joint probability distribution. Santa Monica, CA: The Rand Corporation.
DeWeerd, H. A. (1973). A contextual approach to scenario construction. Santa Monica, CA: The Rand Corporation.
Lindgren, M., & Bandhold, H. (2003). Scenario planning: The link between future and strategy. New York: Palgrave Macmillan.
Schwartz, P. (1996). The art of the long view: Planning for the future in an uncertain world. New York: Doubleday.
Snoek, M. (2003). The use and methodology of scenario making. European Journal of Teacher Education, 26(1), 9-19.
Van Der Heijden, K., Bradfield, R., Burt, G., Cairns, G., & Wright, G. (2002). The sixth sense: Accelerating organizational learning with scenarios (Kindle ed.). New York: John Wiley & Sons.


Allow ICT in exams? Denmark’s simple approach

As information and communication technology (ICT) use has increased in education, there has been some discussion in recent years concerning the use of ICT during school exams. One of the main concerns regarding the use of ICT in education in general is the perceived potential for cheating. This certainly is a real concern, especially for exams, which are intended to give a reliable measure of what students have learned. In some countries, however, steps are being taken to increase the use of ICT in general and in exams. The Danish Ministry of Education has published information on experiences in Denmark with increasing ICT use in the classroom, on assignments, and even on exams.

There's plenty of research highlighting students' indiscriminate use of available information resources, which sometimes borders on, or is a blatant example of, plagiarism. Yet there are many reasons for increasing the use of ICT that arguably outweigh the potential for misuse, especially linking students' everyday use of technology to their school use in order to promote ICT as a learning tool in all facets of life. The struggle to accommodate technology in education has resulted in a range of approaches, some of which are highly questionable given the aims, e.g. blocking access to the Internet or to specific web-based services that students commonly use outside of school. Lassen suggests a highly effective alternative approach that has been used to allow ICT use on exams in Denmark. It is almost deceptively simple, but sensible when you think about it: just change the questions from "when and who" to "how and why". This is an example of changing the practice to accommodate the technology rather than the more common approach of trying to change the technology to fit the practice.


Technology foresight: Learning from early attempts

Irvine and Martin's (1984) seminal Foresight in Science: Picking the Winners provides fascinating insights into early attempts at technology foresight. The authors describe a comparative case study of technology foresight in France, (what was then) West Germany, the US and Japan in the 1970s and early '80s. The study reveals some of the trials and tribulations of early foresight programs, as well as some of their successes. Despite being almost 30 years old, the book is just as relevant today for those interested in technology, research, and social policy as it was when it was published.
The book shows how early attempts at technology foresight suffered from a limited understanding of the relation between technology and society, unsophisticated methodologies, and a lack of confidence in the technology foresight approach itself. Most of the foresight activities described relied too heavily on experts from scientific communities, who were often motivated more by concerns for their own fields than by the broader concerns that the exercises were intended to address. As a result, many of the foresight exercises failed to produce useful results, or the results had little impact on policy.
Japan is presented as a notable exception to the rather unimpressive efforts in the other countries studied. In fact, the authors seem at times so impressed with Japanese foresight activities that this might in itself constitute a justifiable criticism of the book. Nevertheless, the Japanese foresight activities described are deserving of much of the authors' praise. The Japanese approach to foresight, starting in the 1960s and extending through, and beyond, the 1970s, comes much closer to what contemporary foresight activities strive to be. The Japanese based their foresight activities on a very deliberate vision of a technology-driven society and were therefore more comprehensive in their approach, as regards both their methodologies and the scope of the exercises. They used a bottom-up approach, mixed quantitative and qualitative methods, and included a broad range of key stakeholders, including the scientific community and social and cultural experts. Participants were significantly invested in the projects, and the outcomes had considerable impact on Japanese R&D and social and economic policy, and continue to do so to this day.
From the Japanese study, the authors derive the outline of a framework for technology foresight activities that has been referred to as the five Cs:

  • communication between disparate groups of participants
  • concentration on the long-term future
  • coordination of future R&D activities
  • creation of consensus on future priorities
  • commitment to results to ensure that they become self-fulfilling

This framework is further developed in the authors’ later book, Research Foresight: Priority-Setting in Science (Martin & Irvine, 1989), which I will write about in a later post.
References
Irvine, J. & Martin, B. R. (1984). Foresight in science: Picking the winners. London: Francis Pinter.
Martin, B. R. & Irvine, J. (1989). Research foresight: Priority-setting in science. London: Pinter Publishers.


Technology foresight: The difficulty of peering into the future

As I mentioned in an earlier posting, technology foresight emerged from the futures studies and technology forecasting fields and seeks to apply outcomes from these fields to policy- and decision-making (see the earlier posting for a discussion of what technology foresight is). A major concern for policy-makers is the quality of outcomes from the futures studies and technology forecasting fields. Regrettably, a number of self-proclaimed "futurists" have cast shadows of doubt over these fields. An important consideration for any technology foresight activity is therefore how to ensure reliable information and how to identify questionable predictions or forecasts. In this post, I'm going to focus on the negative. I discuss a few high-profile futurists who tend to be very prominent on the Internet and in other very accessible resources, and of whom scholars have been very critical, for good reasons. The point I wish to make is that when considering what to base future-oriented policy decisions on, it is important to evaluate the methodology and substance of the informational inputs used. This posting is not intended to be critical of the futures and forecasting fields as such. Indeed, these fields have developed a number of rigorous and objective methods that produce highly reliable data. I will discuss those in future postings when I get more into methodologies.
Ray Kurzweil is perhaps one of the world's best known "futurists", and one whom many commentators and scholars have questioned. Among Kurzweil's well known predictions are an imminent "technological singularity" and, as one of its consequences, that humans will overcome death in the near future. Kurzweil bases his predictions on an extrapolation of the well known Moore's Law (ML) concerning the number of transistors that can be placed on an integrated circuit. For Kurzweil, ML demonstrates an example of exponential development that results in increasingly accelerated technological advancement. From Moore's Law, Kurzweil derives his "Law of Accelerating Returns" (LAR). Kurzweil's LAR does two things with ML: it extends ML to technologies other than transistors, and it equates increasing transistor density with increased technological capability. Both of these assumptions are highly dubious. Furthermore, Kurzweil treats ML as if it were on par with a natural law, while many scholars and commentators have suggested that the reliability of ML is more a product of its inadvertent normativity than of any descriptive properties, i.e. that ML pressured technology developers to sustain the "law's" predictive power rather than the other way around (van Lente & Rip, 1998; Gardiner, 2007). In any event, Kurzweil's attempt to derive various predictions of the future from his reading of ML makes for some very questionable futurism.
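To see what this kind of extrapolation looks like in practice, here is a minimal sketch, assuming the commonly cited two-year doubling period and the roughly 2,300 transistors of the 1971 Intel 4004 as a baseline. The parameters are illustrative: this is the general form of the extrapolation, not Kurzweil's own model.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Naive Moore's Law extrapolation: the count doubles every
    `doubling_years`, indefinitely."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 2000, 2030, 2045):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```

The curve produces astronomical numbers within a few decades, and it is the leap from such curves to claims about technological capability in general, and human longevity in particular, that makes LAR-style predictions so dubious.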
Whereas Kurzweil's weakness lies in his theoretical assumptions, John Naisbitt, the "Megatrends" guy, has mostly been criticized for his methodology. Naisbitt has written, or co-written, a series of books named Megatrends this-and-that since the early 1980s. Most of them, if not all, have been bestsellers. His first book, Megatrends: Ten new directions transforming our lives, was criticized because Naisbitt did not reveal much about his methodology other than that it was based on "content analysis" of a bunch of newspapers and such. He has addressed the methodology issue to some degree, but it hasn't really changed his approach; his books are still mostly a summary of things being discussed in select information outlets, with only a superficial analytical component at best. Naisbitt tends to be a dedicated optimist; it seems that all trends indicate a fabulous future that we're all just going to love (he seems to miss things like recessions and terrorism, for example)! The result is that Naisbitt's books tend to come across more like propaganda, striving for self-fulfillment, than realistic visions of the future. The most striking example of this is his and Doris Naisbitt's recent China's Megatrends: The 8 pillars of a new society, which has been widely criticized for presenting an overly optimistic, government-sanctioned view of modern China. For the Naisbitts, it would seem that dissidence is almost non-existent in China and that the Chinese are grateful that the government assumes the tedious, but necessary, role of separating the wheat from the chaff on the Internet for them, or what we (at least I) would usually refer to as censorship.
A third somewhat visible "futurist", and a collaborator on Kurzweil's "Singularity University", is Dr. James Canton of the Institute for Global Futures, who published The extreme future: The top trends that will reshape the world in the next 20 years in 2006. It contains such prescient items as the prediction that, in the future, criminals will create fake bank webpages to steal our information (if I really thought that this was not already a concern in 2006, I would almost have deserved to have my information stolen). His blog on the Institute for Global Futures website is a real gem. For example, on May 10, 2009, Canton posted an item about the "Ghost Hack that is now embedded in about 100 million computers, perpetrated by an Asian secret organization". Scary stuff! Unless you've seen the 1995 anime film Ghost in the shell, of which this is essentially the synopsis. Canton is not a very prolific blogger (I guess he's too busy on the speaker circuit). His most recent blog post is from November 24, 2009 and warns us about the megacity explosion: "Over 50% of the planet lives in MegCities (sic) today. We are forecasting over 65% by 2025." Canton is so far off here that I really have no idea what he is talking about. Megacities are defined as urban areas with a population of 10 million or more (Canton includes this definition in his post). According to UN data from 2009, there are 21 urban areas in the world that meet that criterion. The combined population of these urban areas accounts for, at most, about 8% of the total world population.
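A quick back-of-the-envelope check makes the point, using deliberately generous assumptions: the 21 megacities in the UN's 2009 data, an assumed average of 25 million people each, and a world population of roughly 6.8 billion.

```python
# Rough upper bound on the share of the world living in megacities in 2009.
megacities = 21           # UN-recognized urban areas of 10 million or more
avg_population = 25e6     # generous assumption; only Tokyo was much larger
world_population = 6.8e9  # approximate 2009 figure

share = megacities * avg_population / world_population
print(f"At most about {share:.0%} of the world lives in megacities")  # ~8%
```

Even with every megacity assumed to be far above the 10 million threshold, the share comes nowhere near 50%.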
There are at least two properties that all of the above futurists (and others of their ilk) display. The first is a tendency toward sensationalism. Their predictions are meant to evoke an emotional response (whether intense optimism or desperate gloom) more than a rational reflection on how to plan for the future. The other is that their methodology is mostly limited to selective and superficial environmental scanning, oriented more toward reinforcing their own theories or sentiments regarding the future than toward providing objective data. Nevertheless, these authors, if by no other means than their popularity, do demonstrate the increasing recognition of the importance of long-term future-oriented planning. However, they also show us how important it is to critically evaluate any information that is intended to inform policy-making processes.
References
Gardiner, B. (2007, April 24). Does Moore's Law help or hinder the PC industry? ExtremeTech.com. Retrieved August 29, 2010.
van Lente, H., & Rip, A. (1998). Expectations in technological developments: An example of prospective structures to be filled in by agency. In C. Disco & B. van der Meulen (Eds.), Getting new technologies together: Studies in making sociotechnological order. Berlin: de Gruyter.
