While firms achieve success through creating and marketing products, there are also significant opportunities for them to create value by exploiting the capabilities they use to create those products. However, seeing capabilities in this light demands new ways of thinking about product markets and marketing policies.
Many collective human activities, including violence, have been shown to exhibit universal patterns [1-19]. The size distributions of casualties both in whole wars from 1816 to 1980 and in terrorist attacks have separately been shown to follow approximate power-law distributions [6, 7, 9, 10]. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency [18-20], is robust to many generalizations [21], and establishes a quantitative connection between human insurgency, global terrorism [10] and ecology [13-17, 22, 23]. Its similarity to financial market models [24-26] provides a surprising link between violent and non-violent forms of human behaviour.
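As a rough illustration of the kind of size-distribution analysis referred to above, the sketch below (my own, not the authors' code; the data are synthetic placeholders rather than real casualty figures) fits a power-law exponent to a list of event sizes using the standard maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)).

```python
import numpy as np

def fit_power_law_alpha(sizes, x_min=1.0):
    # Maximum-likelihood estimate of alpha for p(x) ~ x^(-alpha), x >= x_min
    # (continuous approximation, in the style of Clauset-Shalizi-Newman fits).
    x = np.asarray([s for s in sizes if s >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(0)
x_min, true_alpha = 1.0, 2.5
u = rng.random(5000)
# Synthetic Pareto-distributed "event sizes": placeholders, not real conflict data.
synthetic_sizes = x_min * (1.0 - u) ** (-1.0 / (true_alpha - 1.0))
print("estimated alpha:", round(fit_power_law_alpha(synthetic_sizes, x_min), 2))
```

On such synthetic draws the estimate should recover an exponent close to the value used to generate them, which is the basic sanity check behind this kind of fit.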
Considers the social and economic arrangements that would be necessary for rational mechanisms of exchange and distribution to emerge, function, and remain viable if extreme conditions produced the absence or severe destruction of institutional infrastructure and resource endowments.
This report identifies constraints and opportunities for the restoration of economic exchange after nuclear war. Four survival scenarios are postulated based on high or low levels of damage to (1) institutions that signal trading opportunities, reduce transaction costs, and regulate and enforce contracts, and (2) resources that are used to create and define wealth. The four scenarios are Best Case, Worst Case, Resource Abundance, and Institution Intensive. Discussed in depth are such items as property rights, barter, currency, trust, credit, supply and demand, and trust as related to authority.
This book presents a comparative international analysis of the impact that new technology and, more specifically, information technology is having on the most rapidly growing area of economic activity, the service sector. Based on a five-year research programme covering 39 organizations in Europe, the USA and Japan, the book tackles the following key questions: 1) Why and how was the new technology introduced? 2) What impact has it had in terms of employment levels, working practices, economic returns and improvement in service quality? and 3) What new skill requirements have been generated? The book includes case studies from a wide variety of areas such as banking, retailing and healthcare and presents comparative research data that allows the reader to draw conclusions on how a significant technological development is being handled in different countries. The book also identifies those political factors which can influence the choice of new technology and which can obstruct the most constructive use of IT.
This paper develops modes of analysis for three major issues in the study of trade unions as organizations. These are, first, their distinctiveness as a discrete type of organization; secondly, the nature of their membership attachment; and thirdly, their twin rationales of representation and administration. The integration of these analyses within a new framework is then pursued. This framework serves to suggest propositions requiring empirical investigation and reference is made to some results from a preliminary study.
We review research on organizations to highlight prevailing and emerging conceptions for embeddedness. An integrated framework that considers the sources, mechanisms, outcomes, and strategic implications of embeddedness is presented. Also, promising research directions for embeddedness approaches, including cross-level issues (such as collective cognition and nesting), as well as issues related to temporality, networks, and methodology are identified.
Academic economists perform an important function in advising politicians and state bureaucrats, lending them epistemological authority. This creates a challenge of institutional design and of professional vocation, of how these experts can combine their commitment to scientific analysis with their commitment towards their governmental patrons. This article examines the case of anti-trust economics, in which government economists are encouraged to remain as academically engaged as possible, so that their advice will be – or appear to be – unpolluted by political or bureaucratic pressures. Yet this ideal is constantly compromised by the fact that the economists are nevertheless government employees, working beneath lawyers. Max Weber’s concept of a ‘vocation’ is adopted to explore this tension, and his two lectures, ‘Science as a Vocation’ and ‘Politics as a Vocation’, are read side by side, to consider this core dilemma of academic policy advisors.
This study compares the discourses around food and the family in popular ‘women’s magazines’ in the UK and Australia in the post World War 2 period.
Advertisements for food, editorials and articles on nutrition, food, family and healthy eating are examined and analysed to map the discursive production of the consumer citizens within the regulatory device of the nuclear family over this period.
Kenya's cut flower industry is often considered a testament to globalization --- a panacea for declining exports that has brought thousands of new employment opportunities to poor rural women. The industry, however, has also come to epitomize the dark side of globalization with booming economic growth lying side by side with human immiseration. In recent years this juxtaposition has spawned a new moral discourse, with images of toxic flower fields and lurid working conditions broadcast into the living rooms of suburban London homes. Such images are part of a new morality of consumption, where consumers, NGOs and global supermarkets aspire to 'save' the African worker from the downside of globalization.
Production of fresh vegetables for export has grown rapidly in a number of countries in sub-Saharan Africa over the last decade. This trade brings producers and exporters based in Africa together with importers and retailers in Europe. Large retailers in Europe play a decisive role in structuring the production and processing of fresh vegetables exported from Africa. The requirements they specify for cost, quality, delivery, product variety, innovation, food safety and quality systems help to determine what types of producers and processors are able to gain access to the fresh vegetables chain and the activities they must carry out. The control over the fresh vegetables trade exercised by UK supermarkets has clear consequences for inclusion and exclusion of producers and exporters of differing types, and for the long-term prospects for the fresh vegetables industry in the two major exporting countries studied, Kenya and Zimbabwe.
International cooperation on export controls for technology is based on three assumptions: that it is possible to know against whom controls should be directed; to control the international transfer of technology; and to define the items to be controlled. These assumptions paint a very hierarchical framing of one of the central problems in export controls: dual-use technology. This hierarchical framing has been in continual contention with a competitive framing that views the problem as the marketability of technology. This thesis analyses historical and contemporary debates between these two framings of the problem of dual-use technology, focusing on the multilateral Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies. Using a framework of concepts from Science & Technology Studies and the theory of sociocultural viability, I analyse the Arrangement as a classification system, where political, economic, and social debates are codified in the lists of controlled items, which then structure future debates. How a technology is (not) defined, I argue, depends as much on the particular set of social relations in which the technology is enacted as on any tangible aspects the technology may have. The hierarchical framing is currently hegemonic within Wassenaar, and I show how actors that express this framing use several strategies in resolving anomalies that arise concerning the classification of dual-use technology. These strategies have had mixed success, and I show how they have adequately resolved some cases (e.g. quantum cryptography), while other areas have proved much more difficult (e.g. focal plane arrays and computers). With the development of controls on intangible technology transfers, a third, egalitarian framing is arising, and I argue that initial steps have already been taken to incorporate this framing into the discourse on dual-use technology. However, the rise of this framing also calls into question the fundamental assumption of export controls that technology is excludable, and therefore definable.
This article comments on the article "Contested Industry Dynamics," by Tiffany L. Galvin, Marc J. Ventresca, and Bryant A. Hudson, published in the present issue of the journal "International Studies of Management & Organization." Given the lack of contextuality characteristic of ecological research, and that institutional theory only recently has focused on issues of institutional change, the article by Galvin, Ventresca, and Hudson offers a fresh and needed complement to evolutionary and institutional research in organization theory. Moreover, the focus on cognitive elements and market stories as enablers and obstacles of industry-wide evolutionary processes links this manuscript to research on belief systems. The two industries studied, tobacco and gambling, are industries with a prime interest in attracting consumer desire for physical and mental stimulation. Gambling is a cultural phenomenon that is driven by individuals' lust for risk taking and playing, whereas tobacco use is driven by both physical and psychological phenomena. The evolution of the tobacco and gambling industries vividly illustrates the importance of the institutional environment for firm survival.
Keywords chronicle and capture cultural change by creating common categories of meaning against diverse local usages. We call this the global-local tension. To test competing theories of this tension, we employ frame analysis of more than 500 journal abstracts over a 25-year period, tracking the spread of business model as an economic keyword generated during unsettled economic times. Analyses reveal the simultaneous adoption of "global" and "local" frames without one supplanting or co-opting the other. The global-local tension is conciliated by providing primacy across communities of discourse to a small collection of frames (i.e., the global presence) while maintaining a plurality of local use within communities (i.e., the local alternative).
Experience with forest management interventions has shown that the design, strategic context and implementation of projects at the local level are key determinants of intervention success. Gaining a strategic understanding of local REDD+ initiatives is therefore important for the further development and governance of the international REDD+ regime. This article reports on an exploratory comparative analysis of 12 REDD+ projects in the Madre de Dios watershed of south eastern Peru. Using a framework drawn from innovation strategy, we focus on the founding and organizational strategies of the different initiatives, thus allowing us to compare across the 12 cases and to explore how these local initiatives link with the emerging national REDD+ architecture in Peru.
Our results point to the importance of hybrid institutional logics, to the key role played by highly networked individuals in pushing project-level REDD+ forward, and to the value of understanding the construction of the REDD+ credit value chain as the fundamental innovation taking place; the development of standards, technologies and other norms is complementary to the basic task of defining and reconfiguring roles on this chain. We suggest that decision makers should continue to encourage the ‘bottom-up’ construction of REDD+ as a strategy to encourage innovation and flexibility, and facilitate research into the governance and transnational systemic nature of the emerging value chain.
This article uses a mid-century text to reengage a late-1970s concept to answer a new century question. The authors return to Alvin Gouldner's classic (1954) study Patterns of Industrial Bureaucracy to reexamine the "coupling" concept in contemporary institutionalism in a way that engages the following question: How do new institutional forms emerge? Based on Gouldner's detailed observations of work in a gypsum mine, the authors argue that coupling processes are key mechanisms in the emergence of institutional forms. Examining coupling as a dynamic process and activity helps us to understand how the institution of bureaucracy emerged in the gypsum mine and interacted with previous social orders of authority and control. Gouldner's account of coupling at the mine is a story of formal and informal power struggles and active conflict over meaning, bringing the process of local institutional formation into sharp relief.
Organizational sociologists often treat institutions as macro cultural logics, representations, and schemata, with less consideration for how institutions are "inhabited" (Scully and Creed, 1997) by people doing things together. As such, this article uses a symbolic interactionist rereading of Gouldner's classic study Patterns of Industrial Bureaucracy as a lever to expand the boundaries of institutionalism to encompass a richer understanding of action, interaction, and meaning. Fifty years after its publication, Gouldner's study still speaks to us, though in ways we (and he) may not have anticipated five decades ago. The rich field observations in Patterns remind us that institutions such as bureaucracy are inhabited by people and their interactions, and the book provides an opportunity for intellectual renewal. Instead of treating contemporary institutionalism and symbolic interaction as antagonistic, we treat them as complementary components of an "inhabited institutions approach" that focuses on local and extra-local embeddedness, local and extra-local meaning, and a skeptical, inquiring attitude. This approach yields a doubly constructed view: On the one hand, institutions provide the raw materials and guidelines for social interactions ("construct interactions"), and on the other hand, the meanings of institutions are constructed and propelled forward by social interactions. Institutions are not inert categories of meaning; rather they are populated with people whose social interactions suffuse institutions with local force and significance.
Framing the economics versus environment debate as a mixed-motive situation makes visible opportunities that allow greater benefits to all interests in the debate. Yet social, cultural, and institutional arrangements frame how these interests see these opportunities, creating a barrier to mixed-motive analyses. In this article, the authors use an institutional perspective to analyze how the economics versus environment debate emerges from institutions as presently structured. They present an analysis of its present framing based on three aspects of institutions (regulative, normative, and cognitive) and consider the prescriptive implications these expose at the managerial and organizational levels of action. The authors conclude with an analysis of possible solutions for overcoming these barriers.
We present a reactor-by-reactor analysis of historical busbar costs for 99 nuclear reactors in the United States, and compare those costs with recent projections for next-generation US reactors. We argue that cost projections far different from median historical costs require more justification than estimates that lie close to those medians. Our analysis suggests that some recent projections of capital costs, construction duration, and total operations and maintenance costs are quite low—far enough from the historical medians that additional scrutiny may be required to justify using such estimates in current policy discussions and planning.
As countries begin to reassess and expand their energy research and development programs, it is appropriate to understand the likely costs of alternate approaches. Unfortunately, communities of experts have a well-known optimistic bias in their judgments. We review findings from several disciplines that underscore this tendency toward overconfidence as well as some proposed alternatives to incorporate it in public decision-making. We further argue, based on a disaggregated analysis of US nuclear power costs, that incorporating a second-order uncertainty in the shape of the distribution of cost components is essential for capturing important elements of uncertainty in moving forward with expanded energy R&D programs.
Quantifying human group dynamics represents a unique challenge. Unlike animals and other biological systems, humans form groups in both real (offline) and virtual (online) spaces—from potentially dangerous street gangs populated mostly by disaffected male youths to the massive global guilds in online role-playing games for which membership currently exceeds tens of millions of people from all possible backgrounds, age groups, and genders. We have compiled and analyzed data for these two seemingly unrelated offline and online human activities and have uncovered an unexpected quantitative link between them. Although their overall dynamics differ visibly, we find that a common team-based model can accurately reproduce the quantitative features of each simply by adjusting the average tolerance level and attribute range for each population. By contrast, we find no evidence to support a version of the model based on like-seeking-like (i.e., kinship or “homophily”).
The selective publication of clinical trials by the pharmaceutical industry and by academic investigators has long been a contentious area in medicine. Clinicians and scientists such as Iain Chalmers, one of the founders of the UK Cochrane Centre, have worked for over two decades to address the problem of the under-reporting of research, suggesting that the failure to report clinical trials breaks ‘an implicit contract with the patients who had participated in them’, and should in some cases be treated as scientific misconduct (Chalmers, 1990, 20 ...).
Decisions about drugs by doctors, patients, and the public depend on a complex framework of industry regulation, professional and public guidance, and transparent publication of the research on which the regulatory framework depends. In the UK, there are two agencies with a national role in this decision-making process. First, a regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), which is akin to the Food and Drug Administration (FDA) in the USA and has statutory responsibility ...
Drawing primarily from Selznick's institutionalism, we make a general case for renewed attention to the "mundane administrative arrangements" that underlie the organizational capacity for value realization and a particular case for the study of value-subverting management innovations. An empirical study of "enrollment management" in liberal arts colleges reveals this ostensibly innocuous innovation's value-undermining effects and identifies the organizational and environmental factors that have made these venerable organizations more or less susceptible to its adoption.
Over the years, network theory has proven to be a rapidly expanding methodology for investigating various complex systems, and it has given unparalleled insight into their structure, function, and response through data analysis, modeling, and simulation. For social systems in particular, the network approach has empirically revealed a modular structure arising from the interplay between network topology and the link weights between network nodes, or individuals. This inspired us to develop a simple network model that captures some salient features of mesoscopic community formation and macroscopic topology formation during network evolution. Our model is based on two fundamental mechanisms of network sociology by which individuals find new friends, namely cyclic closure and focal closure, which are mimicked by local search-with-link-reinforcement and random global attachment mechanisms, respectively. In addition, we included in the model a node deletion mechanism that removes all of a node's links simultaneously, corresponding to an individual departing from the network. Here we describe in detail the implementation of our model algorithm, which was found to be computationally efficient and to reproduce many empirically observed features of large-scale social networks. This model thus opens a new perspective for studying such collective social phenomena as spreading, structure formation, and evolutionary processes.
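To make the three mechanisms above concrete, here is a minimal sketch (my own simplification, not the authors' published algorithm; all parameter names and values are illustrative) of network evolution driven by cyclic closure with link reinforcement, random global attachment (focal closure), and node deletion, using networkx.

```python
import random
import networkx as nx

def evolve(n=200, steps=5000, p_global=0.01, p_delete=0.001, delta=0.5, seed=1):
    random.seed(seed)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for _ in range(steps):
        i = random.randrange(n)
        nbrs = list(g.neighbors(i))
        if nbrs:
            # Cyclic closure: step to a neighbour j, then to one of j's neighbours k,
            # and create or reinforce the link i-k (local search with reinforcement).
            j = random.choice(nbrs)
            second = [k for k in g.neighbors(j) if k != i]
            if second:
                k = random.choice(second)
                w = g.get_edge_data(i, k, {"weight": 0.0})["weight"]
                g.add_edge(i, k, weight=w + delta)
        if not nbrs or random.random() < p_global:
            # Focal closure: attach to a uniformly random other node.
            k = random.randrange(n)
            if k != i:
                w = g.get_edge_data(i, k, {"weight": 0.0})["weight"]
                g.add_edge(i, k, weight=w + delta)
        if random.random() < p_delete:
            # Node deletion: an individual departs, losing all of its links at once.
            v = random.randrange(n)
            g.remove_edges_from(list(g.edges(v)))
    return g

g = evolve()
print(g.number_of_edges(), "edges; avg clustering:", round(nx.average_clustering(g), 3))
```

The local-reinforcement step tends to build densely connected, high-clustering communities, while the global attachment and deletion steps keep the network mixing, which is the qualitative behaviour the abstract describes.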
We study the statistical properties of SIR epidemics in random networks, where an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound on this average. For small values of sc, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of sc is generally (but not always) guided by the network structure and the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose sc to be the typical size of the largest percolation cluster at the critical percolation threshold for that transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass sc, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of sc for predictions of the average outcome sizes of computer failure epidemics.
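As a rough illustration of the quantities discussed above, the sketch below (my own, with illustrative parameter values and a small random graph rather than the paper's networks) treats SIR spread with transmissibility T as bond percolation, labels outbreaks of fractional size at least s_c as epidemics, and reports their frequency and mean size.

```python
import random
import networkx as nx

def percolation_outbreak_sizes(g, T, trials=200, seed=0):
    random.seed(seed)
    sizes = []
    n = g.number_of_nodes()
    for _ in range(trials):
        # Bond percolation: keep each edge independently with probability T.
        kept = [(u, v) for u, v in g.edges() if random.random() < T]
        h = nx.Graph()
        h.add_nodes_from(g.nodes())
        h.add_edges_from(kept)
        # The outbreak from a random seed reaches exactly its percolation component.
        seed_node = random.choice(list(g.nodes()))
        sizes.append(len(nx.node_connected_component(h, seed_node)) / n)
    return sizes

g = nx.gnp_random_graph(2000, 3.0 / 2000, seed=42)   # mean degree ~3, illustrative
T, s_c = 0.6, 0.01                                    # illustrative values
sizes = percolation_outbreak_sizes(g, T)
epidemics = [s for s in sizes if s >= s_c]
print("P(outbreak >= s_c):", len(epidemics) / len(sizes))
print("mean epidemic fraction:", sum(epidemics) / max(len(epidemics), 1))
```

With s_c chosen near the typical size of the largest cluster at the critical threshold, the mean epidemic fraction should approach the giant-cluster strength, which is the approximation the abstract discusses.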
Chinese translation by Zhang Beisheng and Diao Xiaojing.
Over the last decade the flying patterns and foraging behavior of bees have become a matter of public policy in the European Union. Determined to establish a system where transgenic crops can 'coexist' with conventional and organic farming, the EU has begun to erect a system of demarcations and separations designed to minimize the extent of 'gene flow' from genetically modified plants. As the European landscape is regimented through the introduction of isolation distances and buffer zones, bees and other pollinating insects have become vectors of 'genetic pollution', disrupting the project of cohabitation and purification devised by European authorities. Drawing on the work of Michel Serres on parasitism, this paper traces the emergence of bees as an object of regulatory scrutiny and as an interruptor of the 'coexistence' project. Along with bees, however, another uninvited guest arrived unexpectedly on the scene: the beekeeper, who came to see his traditional relationship to bees, crops, and consumers at risk. The figure of the parasite connects the two essential dynamics described in this paper: an escalation of research and the intensification of political attributes.
Anticipating risks has become an obsession of the early twenty-first century. Private and public sector organisations increasingly devote resources to risk prevention and contingency planning to manage risk events should they occur. This book shows how we can organise our social, organisational and regulatory policy systems to cope better with the array of local and transnational risks we regularly encounter. Contributors from a range of disciplines - including finance, history, law, management, political science, social psychology, sociology and disaster studies - consider threats, vulnerabilities and insecurities alongside social and organisational sources of resilience and security. These issues are introduced and discussed through a fascinating and diverse set of topics, including myxomatosis, the 2012 Olympic Games, gene therapy and the recent financial crisis. This is an important book for academics and policy makers who wish to understand the dilemmas generated in the anticipation and management of risks.
• Presents a new analytical take on risk regulation issues by focusing on the role of anticipation
• Multi-disciplinary approach gives a broad-ranging view of the varying social science debates about risk regulation, including an historical view on risk society debates
• Includes detailed case studies covering a range of different regulatory d ...
On the seventh day of the trial of The State of Tennessee vs. John Thomas Scopes, William Jennings Bryan was cross-examined by Clarence Darrow. What ensued was one of the most famous exchanges in American legal history, and a constant referent in the struggle between religious Fundamentalists and defenders of academic freedom and natural evolution. Many saw in Darrow’s interrogation of Bryan a moment of revelation, a dramatic instantiation of the irreducible conflict between ‘ancient religion’ and ‘modern reason’. This paper returns to the transcript of the trial to examine what form of incommensurability was produced in the course of that celebrated examination. Contrary to the conventional interpretation of the Scopes trial as a moment of mutual untranslatability between irreconcilable positions, an analysis of the conversational structure of the exchange suggests a surprisingly robust and productive dialogue. It reveals a form of difference sustained by a shared grammar of mutual accountability.
In the 1970s a group of social scientists attempted to create a new, more democratic form of work organization aboard the Norwegian merchant ship Balao. To do so they redesigned the physical structure of the ship to facilitate the emergence of a participatory shipboard community. This paper revisits the journeys of Balao as an example of the potential and limits of experimental democracies. It describes the processes of social miniaturization implemented in the organization of Balao's work processes, the particular ergonomics of democracy that resulted from the new model layout and the vexing question of the project's landfall: its limited impact as a ‘demonstration experiment’ for a more democratic organization of production.
Law and bioethics have traditionally expressed an elective affinity. Bioethics has often spoken "in the language of the law," or at least in a pidgin that the law can easily understand, and bioethicists have conceptualized their principles and arguments in ways that make them amenable to legal translation. However, there has always been a tradition of bioethical reflection, what I describe as "self-contained bioethics," that is deeply suspicious of its own readability to legal forms of interpretation; a tradition for which proximity to the law and a consideration of the actual circumstances of legal action compromises the purity of ethical inquiry.
The article examines the work of President Bush's Council on Bioethics on human cloning as an example of this type of bioethical inquiry. In its 2002 report "Human Cloning and Human Dignity: An Ethical Inquiry" the Council professed to keep specific legal and policy options at arm's length to better explore the ethical significance of the issues at stake without being "skewed" by considerations of practical expediency. The article compares the form of bioethical advice produced by the Council with the opinion on human cloning that its predecessor, the National Bioethics Advisory Commission (NBAC), issued in 1997. In NBAC's report the discussion of the paths of legal action was thoroughly and explicitly integrated into the moral analysis.
Ultimately, the President's Council resorted to the law, understood as an instrument of absolute prohibition, to translate its purified moral argument into social reality in the form of a ban or a moratorium. Yet, by not using legal considerations to focus and refine its bioethical inquiry, and instead resorting to the law merely as a tool of proscription, the Council guarantees that its advice remains as controversial in the short term as it is ineffective in the long run.
Provoking a conversation among a small group of people gathered in a room has become a widespread way of generating useful knowledge.1 The focus group is today a pervasive technology of social investigation, a versatile experimental setting where a multitude of ostensibly heterogeneous issues, from politics to economics, from voting to spending, can be productively addressed.2 Marketing is the field in which the focus group has acquired its most visible and standardized form, as an instrument to probe and foretell economic behaviour by anticipating the encounter of consumers and products in the marketplace.3 But whether they are used to anticipate consumer behaviour in a laboratory-like setting, or to produce descriptions of political attitudes, conversations elicited in the 'white room' of the focus group are relevant to a striking range of objects of social-scientific inquiry.4
The observation of contrived groupings of research subjects in 'captive settings' is of course a familiar source of knowledge in the social sciences, but there is something peculiar to the focus group as a research technology. In focus groups, knowledge is generated in the form of opinions. Moreover, a group dynamic is used to bring into existence a series of relevant individual opinions; the peculiar form of social liveliness of the focus group is meant to 'produce data and insights that would be less accessible without the interaction found in a group' (Morgan, 1988: 12). Both the productive qualities and methodological quandaries of the focus group originate in its special form of liveliness. The peculiar politics and epistemology of a focus group conversation derive from the tension implied in using a group to engender authentically individual opinions. Moderators are in charge of resolving this tension: they must make the conversation conducive to the expression of private and idiosyncratic views, while preventing the focus group from rising to the status of a 'collective;' they are called to structure a process of interaction conducive to the elicitation and elucidation of the most private of views, while reducing to a minimum the residuum of 'socialness' left over from the process. As a professional group moderator describes it:
We talk to ourselves all the time. Most of these inner thoughts never surface. They reflect the same kind of internal dialogue we have when we stand at a supermarket shelf to select paper towels or stop to take a closer look at a magazine ad for a new cell-phone service or decide whether to use a credit card to pay for gas. Our running commentary is often so subliminal that we often forget it's going on. As a focus group moderator, I reach out to consumers in my groups and try to drag that kind of information out of them and into the foreground. What I do is a kind of marketing therapy that reveals how we as consumers feel about a product, a service, an ad, a brand. (Goebert, 2002: viii)
Researchers hope to externalize the silent 'running commentary' of consumers by means of an intently managed group discussion, to translate a series of inaudible monologues into a visible conversation. They provoke an exchange so as to bring to light the inner qualities of consumers.
Knowledge about people is extracted from the opinions elicited from them – opinions that are freely expressed by the subjects, yet structurally incited by the setting.5 Those opinions are then selected, categorized and interpreted by the focus group researcher and fed into production and marketing strategies. 'Illustrative opinions' are filtered from the wealth of talk generated in the discussion, to be quoted verbatim or paraphrased in the research reports circulated to clients and other relevant audiences. Thus, opinions generated in the 'white room' are read, interpreted, and discussed by managers and marketers who were not present in the original conversation and are in no position to directly assess their authenticity or relevance. The statements produced in the unique environment of the focus group enter a long chain of quoting and rephrasing, and reverberate into other actors' market strategies. The ultimate product of a focus group conversation is a series of tradable opinions – statements that are generated in an experimental setting but can be disseminated beyond their site of production. Opinions elicited from focus group participants thus help constitute particular marketplaces.
Producing opinions of such value and mobility is a highly complex technical process. A focus group can generate a multitude of objects that, while seemingly identical to relevant opinions, are in fact radically different in kind: false opinions, induced judgments, or insincere beliefs, all of which appear profusely in the course of a focus group discussion – especially in a poorly run one. These deceptive statements must be sorted out and expunged so as not to lead researchers and their audiences astray. The task of the moderator is to manage the focus group discussion so as to limit the proliferation of irrelevant or inauthentic viewpoints; to foreground tradable opinions against the background noise that is inevitably generated in the experimental situation.
The purpose of this chapter is to draw attention to some of the strategies utilized by focus group moderators to carry out this task of extracting tradable opinions out of experimentally generated conversations. In so doing, we can regain a proper appreciation of the extent to which categories such as 'relevant opinion' or 'consumer preference' are problematic – and not simply or primarily to the external observer, but to the actors who are professionally trained to elicit and recognize them, the focus group moderators.
My account will be limited in a number of important ways. The manufacture of opinions in a focus group starts with the assembling of a group of adequate research subjects and a meeting with one or more moderators, but the 'focus group chain' comprises a long sequence of exchanges and analyses beyond this initial encounter. This chapter, however, will only investigate the initial experimental moment, when research subjects and moderators come together in the physical setting of the focus group 'white room.' Moreover, I will analyse this encounter solely from the perspective of the moderators: my analysis is based on the moderators' own technical literature – the training manuals, methods handbooks, autobiographical accounts, and other documents in which they lay out their own philosophy of 'good practice' and a portrayal of the 'good moderator.' I do not attempt to examine the focus group discussion from the point of view of the research subjects, nor will I draw extensively on analyses of the patterns of interaction between subjects and moderators that actually emerge in a focus group, a dimension of the focus group encounter that others have studied at some length (Myers, 1998 and 2004; Myers and Macnaghten, 1999; Puchta and Potter, 1999 and 2003). The chapter is thus limited to descriptions of the craft of moderation that professional moderators have put into writing.6 Through this literature, I try to reconstruct an ideal moral epistemology of moderation. In particular, I try to capture the political constitution of an experimental setting in which individual attitudes are elicited and market behaviour is routinely anticipated.
The chapter is organized around three themes, all of them topics that social scientists have frequently raised in relation to the production of scientific knowledge under experimental conditions: 1) the distinction and balance between naturalness and artificiality in the focus group setting, and the embodiment of this distinction in the moderator's skills and abilities (or, rather, in the accounts that moderators give of their own craft); 2) the co-production of knowledge and particular forms of social order, or the political constitution of the focus group – a constitution that ideally, I will argue, takes the form of an isegoric assembly; and finally, 3) the role of material artifacts and the physical arrangement of the setting in the organization of the 'focus group chain' as a technology of knowledge production. The chapter concludes with a call to make the production of opinions a proper object of sociological investigation, in the same way that the creation and circulation of knowledge has long occupied a central place in the agenda of sociological research.
Functional foods and foods derived from genetically modified organisms represent two forms of intervention in the design of foodstuffs that have given rise to distinct political and regulatory dynamics. In Europe, regulatory agencies have tried, unsuccessfully, to affix a definitive legal meaning to these categories of food artificiality. This incomplete process of legal disambiguation has gone hand in hand with the delegation of the responsibility for overseeing new products to consumers, who are asked to continuously consider and assess the qualities of foods when making their choices in the marketplace. In the case of genetically modified foods, we have witnessed strategies of avoidance premised on the consideration of genetic modification as a blemish on the conventional character of foodstuffs. Functional foods, on the other hand, are increasingly mobilized in practices of naturalistic enhancement. What both examples have in common is the open-ended character of their respective regulatory regimes, and the continuous prodding of consumers to involve themselves more intensely in the weighing of their food choices. The result is a particular mode of market activism that we describe as restless consumption.
Over the past decade, a new structuralism has begun to emerge in organizational theory. This exciting new research program draws inspiration from the social structural tradition in sociology, but extends that tradition by more broadly conceptualizing social structure as comprised of broader cultural rules and meaning systems as well as material resources—revealing the subtleties of both overt and covert power. Building on the insights of Bourdieu and related work in social theory and cultural sociology, new structuralist empirical research focuses on concrete manifestations of culture in everyday practice and has pioneered the measurement of cultural aspects of social structure using a variety of relational methods. In this essay, we revisit mid-century social structural approaches to organizations, review the development of organization theory as a management subfield that increasingly focused on instrumental exchange, highlight key aspects of the new structuralism in organizational theory, and discuss promising new research directions.
This article examines how social movements contribute to institutional change and the creation of new industries. We build on current efforts to bridge institutional and social movement perspectives in sociology and develop the concept of field frame to study how industries are shaped by social structures of meanings and resources that underpin and stabilize practices and social organization. Drawing on the case of how non-profit recyclers and the recycling social movement enabled the rise of a for-profit recycling industry, we show that movements can help to transform extant socio-economic practices and enable new kinds of industry development by engaging in efforts that lead to the de-institutionalization of field frames.
The comparative institutionalist approach to differences in national business systems necessarily highlights variations in the workings of contemporary capitalism. Less emphasis is given to similarities in historical events leading to the institutionalization of eventual forms of governance. These must include a common underlying ideational orientation of political elites to develop an industrial base from which to expand GDP. This urge to industrialize has led to a second common phenomenon — the importance of the family- or personally controlled business group (FBG). The alliance between political state and family business group is evidently a long-standing feature of national business systems. This paper suggests that such relational capitalism might also be seen as dangerously dependent on national communities that are homogenous in respect to ethnic identity. In South-East Asia, relations between the state and FBG can exacerbate inequities among multi-ethnic communities and provide institutionalized blockage to technological innovation. However, a different element of this process between ASEAN and earlier industrializers is the presence of foreign MNEs, which intrude upon state–FBG coupling and provide new options in the formation of local markets for labour and capital.
The social embeddedness of the guest multinational enterprise (MNE) is presented as a multi-layered series of interfaces between expatriate managers and agencies and actors within the host state. In contrast to the unilinear and intentional development of strategic relations between the parties described in much of the literature on international business, the author seeks to demonstrate that relations are segmented by a diversity of micro- and macro-social and political considerations. Not least among these are differences of life chances among ethnic groups in the host country and across careers within the guest MNE. The study that provides the basis for these observations is presented as a ‘high-context’ deep description of roles and actors within the head offices of 20 European MNEs and within their affiliates located in Brunei Darussalam, Singapore, Malaysia and Thailand.
Leaders in government, the mass media, and party political manifestos all insist that political representation should be present at every level of industry, from the shopfloor to the boardroom. Yet the traditional form of this representation has been free collective bargaining rather than joint consultation and mutual power sharing. It is not enough for management simply to ask employees their opinions on key issues. Workers must see that their suggestions are recorded and actually reflected in later actions of the company. This feedback is essential.
This paper begins by describing an attempt to evaluate the contribution of the overseas affiliates of 20 European parent multinational enterprises (MNEs), based in four South-East Asian countries: Malaysia, Singapore, Brunei Darussalam and Thailand. This was done through semi-structured interviews carried out by the author in parent headquarters, regional offices and affiliated plant locations, together with state institutions in the host nations. The visits were repeated over a 2-year period from 1997 to 1999. The resource-dependency model used to structure the research focused on two characteristics of the affiliates. The first was whether the contribution of the affiliate was to high-value-added locally designed products or to low-value, low-design products. A second dimension was provided by the place held by the plant on local/global supply chains. An initial hypothesis was that distinctively different modes of organizational learning would emerge from the conjunction of nationally differentiated parent MNEs and the institutional context of the host country and would, therefore, shape the manner in which dependencies were operationalized. In practice, two factors seemed more important in shaping the business strategy and style of the affiliate. The first was the nature of the operational technology, particularly the sunk costs of local investment, and the degree to which these ‘captured’ the MNE for the national economic development strategy of the host government. The second was the political economic context of both the parent and affiliate, and the manner in which these caused the dissipation or enhancement of managerial and technical capabilities. In essence, the focus of the study shifted from one in which the MNE and host state were treated as holistic actors interacting along a single common dimension, to one in which the political incorporation and social embeddedness of the former within the context of the latter was accomplished through multiple and socially segmented networks. The question of the extent to which the MNE can act autonomously to shape either its internal or external structure might, therefore, be taken as a matter for political and ethical judgement as much as for market forces.
This paper sets out a brief summary of the analysis of industrial relations systems that has emerged from the work of scholars observing the British situation over the past twenty years. In particular it focuses, first, upon the 'consensus-convergence model' favoured by American academics in the 1950s and, secondly, upon the 'informal-formal divergence' model put forward by a group of Oxford scholars in the 1960s. Both models emphasize institutional aspects of the system: the needs and aspirations of the actors are seen as part of the input of the system largely in so far as they involve conflict or disorder. The output of the industrial relations system is seen to be rules, the most important of which are the procedures by which disputes may be resolved and individual grievances may be handled. The production of such rules depends on the support forthcoming through 'a sufficiently high degree of consensus among those whose interests are most affected by their application'.
Much of the empirical material upon which such models rest is derived from a short-term historical analysis of the institutional output of rules and actions stemming from the system under examination. The paper suggests that greater attention should be paid to the actors' needs which provide an input to the system and to the nature of the consensus required for the operation of existing institutions. It maintains that a longer term model of occupational change as related to market, technological and organizational factors, may prove to be more viable than past models in predicting the concomitant modifications in the institutions of industrial relations.
This paper sets out to examine the assumption made by radical economists that internal labour markets formed by large scale corporate employers provide a source of labour market dichotomization between `core' and `peripheral' employment. It criticizes the assumption that internal labour markets can be treated as a culturally neutral phenomenon emerging from the demands of technical rationality. Since, by definition, the boundaries of internal labour markets are institutionally defined, their forms and rationales display a cross-national diversity which indicates a difference in employer strategies and employee responses to the historical course of technological innovation. In particular it suggests that the struggle for task control over the mode of production represented in the creation of new occupations has, in the Anglo Saxon culture, been more likely to take place at the point of production, whereas in France it has been expressed in overtly class terms and in modern Germany in the bureaucratic control systems adopted by a corporate pluralistic state.
This paper sets out to examine the main features of the contingency model of participation put forward by Walker. It examines the model from a methodological perspective, referring to experience with participative techniques of management to bring out the value-laden nature of the exercise and the difficulties of achieving 'rigour' in relation to the criteria for 'success' set out by Walker (op. cit.), French and Wall and Lischeron. Finally, it refers to two paradigms of participation. One is that of 'constitutional pluralism', in which the participant is assumed to have only a narrow instrumental approach to industrial democracy. The other is that of 'primitive democracy', in which the individual is normally assumed to maintain high and sustained levels of awareness and involvement once having achieved political 'maturity'. It concludes by adopting the former, more cynical, view of the world.
In neo-classical economic theory labour is a commodity and the ultimate value of the employee's services is determined by the sales value of the product of those services: the cost of supply reflects both the disutility of work for the recruit and his equalisation of net advantages between jobs. For modern labour economists the assumption that entrepreneurs require identical inputs of labour and that new recruits will therefore possess similar skills (the conditions of free competition) is an unrealistic one. Hence segmental labour market theory has grown out of the need to explain the differences between, and the shared needs and commonalities within, each group of consumers (employers) on the one hand and suppliers (employees) on the other. In this way it has been possible to carry on assuming the existence of perfect competition on both sides of the market within the boundaries of labour markets thus defined.
Describes the manner in which the telecommunications sector emerged historically within different national business systems, covering national policy agendas, the structure of the cross-national telecommunications industry, and the structure of the telecommunications equipment supply sector.
Human Relations was founded in 1947 as a collaborative transatlantic project between the Tavistock Institute of Human Relations in London and the Research Center for Group Dynamics at the Massachusetts Institute of Technology. Its objective was to encourage theoretical and methodological contributions to the social sciences and to promote their practical application to solve community problems. This article traces the development and evolution of the journal and seeks to assess its contribution to social science research. It examines the intellectual role of the Tavistock Institute and the tensions and pressures that the journal has faced over the past 60 years as it has sought to fulfil its mission and achieve its academic goals.
The article identifies constraints that limit employee participation in management's decision-making process. Determinants of participation include: the propensity to participate or workers' interest in being involved in the shared decision-making activities; the organization's participation potential which relates to structural and situational factors; and management's acceptance of workers' participation despite the time pressures and lack of expertise among employees which could delay strategic decisions. Forms of participation on the shop floor can range from job control to joint consultation or employee ownership.
Review of Tim Congdon's Keynes, Keynesians, and Monetarism
Review of Matthew Bishop and Michael Green's Philanthrocapitalism
In 2004, a former FDA medical officer named David Ross watched news coverage of one of the highest-profile pharmaceutical controversies in recent years: Merck’s withdrawal of Vioxx, its bestselling painkiller, from the global marketplace.
At the time, David Ross railed to his wife about the actions of David Graham, an associate director of drug safety at the FDA who drew international attention for testifying before the US Senate about the FDA’s handling of Vioxx. Graham said his supervisors ignored warnings that Vioxx could lead to cardiac arrest, and asked him to change his conclusions on an internal report about Vioxx’s risks.
Today, David Graham is still at the FDA. David Ross is not. But something both have in common is a concern that the FDA has not amended its policies much in the wake of the Vioxx controversy, and may be, in some ways, seeking to police internal criticism more strictly.
Recent controversies over the safety of drugs such as Vioxx, a painkiller manufactured by Merck, and Seroxat, an SSRI antidepressant manufactured by GlaxoSmithKline, have led to a consequence that at first seems entirely positive: they have attracted more attention to the suppression of negative clinical trial data by industry and academic researchers, helping to establish clinical trial registries where randomized controlled trials (RCTs) must be registered at their outset.
In this paper, through a focus on debates over the safety of selective serotonin reuptake inhibitor (SSRI) antidepressants such as Seroxat, I examine recent efforts to provide more public access to clinical trial data, documenting how, despite declarations of their commitments to openness, pharmaceutical companies have found ways to evade the need to publicly disclose trials. Secondly, I suggest the emphasis on securing more access has fostered the adverse effect of deflecting attention from methodological limitations within studies that are widely available. Drawing analogies to work by Bourdieu, I argue that a paradoxical consequence of the demand for more access is the tendency to solidify faith in the moral authority of RCTs, something that inadvertently strengthens the authority of those, such as regulators and industry groups, which withheld RCT data to begin with.
I suggest it is the very limitations of RCTs – their inadequacies in producing reliable evidence of clinical effects – that help to strengthen popular and scientific assumptions of their superiority as methodological tools. This point sheds light on the question of why systems widely recognized to be ineffective often assume greater authority at the very moment when people speak of their dysfunction.
Drawing on narrative interviews with psychiatrists and health analysts in Britain, the article provides an analysis of debates over the safety of SSRI antidepressants such as Prozac and Seroxat. The focus of the article is on what I describe, drawing on Foucault, Nietzsche, Niklas Luhmann and Michael Power, as a 'will to ignorance' within regulatory bureaucracies which works to circumvent a regulator's ability to carry out its explicit aims and goals. After a description of the regulatory processes that have influenced the efforts of patients and practitioners to reach conclusions on the risks and benefits of antidepressants, I conclude by suggesting that the article's analysis of the regulation of SSRIs carries theoretical insights for the study of regulation and bureaucracy in general.
In September 2004, the global pharmaceutical manufacturer Merck & Co. removed Vioxx from the US market. The company soon faced almost 30,000 lawsuits over the alleged concealment of adverse effects. Despite suspicions that the Vioxx scandal would cripple the company's profitability, Merck's shares more than doubled between 2005 and 2007. Drawing on this case, I describe how scientific uncertainty surrounding the effects of Vioxx has been legally useful to Merck executives in deflecting culpability for failing to disclose the drug's adverse effects. Extrapolating from this, I suggest uncertainty is generative and performative: it creates a demand for resolutions to the ambiguity it perpetuates, often strengthening the authority of those who have advanced a position of uncertainty to begin with. Finally, I argue that paying more attention to the value of 'capitalized uncertainty' helps to nuance earlier work on the manufacture of risk and uncertainty.
Philanthrocapitalism – the harnessing of pro-market strategies in order to increase returns on philanthropic investment – is starting to polarize opinions in the worlds of philanthropy, global health and development. Coined as recently as 2006, the term “philanthrocapitalism” has yet to attract significant attention among social scientists. In this paper, drawing on interviews with policy experts at places such as the WHO, the World Economic Forum, and Médecins Sans Frontières, I examine the emergence of philanthrocapitalism, comparing it to parallel trends aimed at alleviating poverty in developing regions, such as the promotion of social investment and the rise of social entrepreneurship. I then explore the influence of these related, but distinct, phenomena on the politics of global health governance, with a particular focus on concerns over the growing dominance of the Gates Foundation in agenda-setting and advocacy efforts. Finally, I draw parallels between the new philanthropy and more traditional forms of charitable investment practiced during the 20th century. Theorists such as Bourdieu, building on Mauss’ work, have long suggested there is no such thing as a free gift: all gifts are offered either with the assumption of eventual reciprocity, or with an eye to generating greater social prestige. This insight parallels criticisms from scholars who, pointing to Gramsci’s conception of hegemony as “intellectual and moral leadership,” argue that philanthropic projects aimed at improving social conditions or providing educational opportunities are rooted in ensuring the viability of US and European economic and foreign policy interests. I suggest the growth of philanthrocapitalism raises novel concerns, which both legitimate and go beyond the questions about older forms of philanthropy posed by earlier scholars such as Gramsci, Mauss and Bourdieu.
The economy is back at the centre of sociological analysis. This, of course, only means that it has recaptured the position it once held in the works of the sociological ‘founding fathers’, Simmel, Pareto, Weber, Marx, and Durkheim. The so-called ‘new economic sociology’ (NES) is a field that grew out of studies made by US sociologists who essentially used three perspectives: cultural sociology, organizational sociology, and structural network sociology (Swedberg 1997). In addition, one could mention the political economic perspective and Bourdieu’s (2005) work. These last two, however, have been less important for the development of NES. More recently the idea of performativity has come into vogue (cf. Swedberg 2004). In this essay I review three books representing different economic-sociological perspectives drawn from the authors above: performativity, cultural sociology, and structuralism. The books, taken together, show the progress of the field, but they also point to its problems and shortcomings. They focus on the most central mechanism of the economy, markets, which for a long time have been the main issue of NES. The issue of the quality of the products found in markets is another common theme in the works reviewed. Finally I will discuss several major problems in today’s economic sociology, and suggest some strategies to improve the situation.
Drawing on an analysis of Irving Kirsch and colleagues’ controversial 2008 article in PLoS [Public Library of Science] Medicine on the efficacy of SSRI antidepressant drugs such as Prozac, I examine flaws within the methodologies of randomized controlled trials (RCTs) that have made it difficult for regulators, clinicians and patients to determine the therapeutic value of this class of drug. I then argue, drawing analogies to work by Pierre Bourdieu and Michael Power, that it is the very limitations of RCTs — their inadequacies in producing reliable evidence of clinical effects — that help to strengthen assumptions of their superiority as methodological tools. Finally, I suggest that the case of RCTs helps to explore the question of why failure is often useful in consolidating the authority of those who have presided over that failure, and why systems widely recognized to be ineffective tend to assume greater authority at the very moment when people speak of their malfunction.
As a result of the growth of evidence-based practices across the world, health-care providers and policymakers in the United States, United Kingdom, and Europe have established institutes such as the United Kingdom’s National Institute for Health and Clinical Excellence (NICE) to produce clinical guidelines that physicians are asked to use in their daily practice. This article discusses how the withholding of clinical trial information by pharmaceutical companies and academic researchers affects the reliability of clinical guidelines. It first offers a case study analysis of the U.K. drug regulator’s failure to prosecute GlaxoSmithKline, manufacturer of the bestselling antidepressant Seroxat (manufactured as Paxil in North America), for withholding information on the safety of Seroxat from regulators. It next examines the idea of a “Sarbanes-Oxley for Science,” a recent proposal that seeks to introduce legislation forcing companies to disclose clinical trials that have indeterminate or negative results. Legislation such as Sarbanes-Oxley for Science would solve some problems with the withholding of data, but not all. Until practitioners and policymakers address the political and legal barriers preventing full access to clinical trial data for all medical treatments, the ideals of evidence-based practice will remain elusive.
From the subprime crisis, to the war on terror; from the politics of climate change to the rise of ‘evidence’ based medicine, this workshop will examine whether it is ignorance, and not knowledge, that serves as the main bulwark of economic, political and financial strength. In the context of global crises, whether financial, military or pandemic, is the ‘unknown’ increasingly being exploited as a strategic tool for personal and organisational survival?
Review of Naomi Klein's The Shock Doctrine: The rise of disaster capitalism
This article critically evaluates the Medicines and Healthcare products Regulatory Agency’s announcement, in March 2008, that GlaxoSmithKline would not face prosecution for deliberately withholding trial data, which revealed not only that Seroxat was ineffective at treating childhood depression but also that it increased the risk of suicidal behaviour in this patient group. The decision not to prosecute followed a four and a half year investigation and was taken on the grounds that the law at the relevant time was insufficiently clear. This article assesses the existence of significant gaps in the duty of candour which had been assumed to exist between drugs companies and the regulator, and reflects upon what this episode tells us about the robustness, or otherwise, of the UK’s regulation of medicines.
This workshop, organised by InSIS, aimed to foster an exchange between different approaches in science and technology studies and political and economic sociology about the study of heterogeneous arrangements – assemblages in which economic relations are always entangled with political, technical, ethical, social and material ones. We ask how this research can engage with the current context, in which the ‘free market’ is becoming a less secure and more openly contested category.
Nowadays, projects and project management can be found in virtually all organisations. Accordingly, preventing project failure has become increasingly critical. In light of this, attempts have been made to redefine the conceptual bases of project management and to define new avenues for research, development of the field, and practical application. So far, theory from organisation design has been under-explored with respect to understanding project organisations and their structures. This paper contributes by considering projects as organisations, having a specific structure. Based on the seminal work of Mintzberg [Mintzberg H. The structuring of organizations. Prentice-Hall, 1979] we develop a typology of project structures. Two illustrative case studies are discussed. The typology shows that contingency factors of projects need to be reflected in appropriate project structures.
By criticizing the prevailing network models in the policy science literature, we hypothesize that regional networks are not helpful for a company per se; network membership may be used for purposes unique to the dominating firm. Thus, institutional linkages can most fruitfully be analysed by looking at the level of the firm, and the role played by a firm's interpretation and shaping of its environment. For this purpose we look at the historical gestation of the corporate capabilities of Bosch, a large automotive component firm in the German state of Baden-Württemberg. Our research suggests that we discard the hypothesis that Bosch's position is heavily reliant upon a cooperative network with SMEs; instead we suggest an analysis along Chandlerian lines. The present changes in component sourcing policies of large car manufacturers will decrease the Multinational Corporations' dependency upon any particular regional network. The manufacturers' strategic decisions about sourcing policy, research linkages and location of manufacturing facilities reflect management's evaluation of the benefit of network membership. More specifically, for firms engaged in global competition, even if they have been largely nationally based in the past, global linkages have become overwhelmingly important.
This paper advances a relational sociology of organization that seeks to address concerns over how organizational action is understood and situated. The approach outlined here is one which takes ontology seriously and requires transparency and consistency of position. It aims at causal explanation over description and/or prediction and seeks to avoid pure voluntarism or structural determinism in such explanation. We advocate relational analysis that recognizes and engages with connections within and across organization and with wider contexts. We develop this argument by briefly reviewing three promising approaches: relational pragmatism, the social theorizing of Bourdieu and critical realism, highlighting their ontological foundations, some similarities and differences and surfacing some methodological issues. Our purpose is to encourage analysis that explores the connections within and between perspectives and theoretical positions. We conclude that the development of the field of organization theory will benefit from self-conscious and reflexive engagement and debate both within and across our various research positions and traditions, but only if such debates are conducted on the basis of holistic evaluations and interpretations that recognize (and value) difference.
In this paper, the authors examine some of the implications of born-digital research environments by discussing the emergence of data mining and the analysis of social media platforms. With the rise of individual online activity in chat rooms, social networking sites and micro-blogging services, new repositories for social science research have become available in large quantities. Given the changes of scale that accompany such research, both in terms of data mining and the communication of results, the authors term this type of research ‘massified research’. This article argues that while the private and commercial processing of these new massive data sets is far from unproblematic, their use by academic practitioners poses particular challenges with respect to established ethical protocols. These involve reconfigurations of the external relations between researchers and participants, as well as the internal relations that compose the identities of the participant, the researcher and that of the data. Consequently, massified research and its outputs operate in a grey area of undefined conduct with respect to these concerns. The authors work through the specific case study of using Twitter’s public Application Programming Interface for research and visualization. To conclude, this article proposes some potential best practices to extend current procedures and guidelines for such massified research. Most importantly, the authors develop these under the banner of ‘agile ethics’. The authors conclude by making the counterintuitive suggestion that researchers make themselves as vulnerable to potential data mining as the subjects who comprise their data sets: a parity of practice.
This essay suggests ‘Innovation Journalism’ as a useful theme through which to explore the interplay of journalism in innovation ecosystems. This involves investigating how journalism plays a part in connecting innovation with public interests and how innovation processes and innovation ecosystems interact with public attention, with news media as an actor. It may also be of interest to study in which ways journalists cover innovation processes and innovation ecosystems, the incentives that drive innovation journalism and how news organizations may be organized to perform the task. We outline examples of research project topics to illustrate how this approach can inform studies of innovation, studies of journalism as practice, and possible scopes for the research theme. Going forward, we propose to identify earlier relevant scholarly research and relevant researchers that can be attributed to this emerging research theme in Innovation Journalism.
Complex-systems research is becoming increasingly data-driven, particularly in the social and biological domains. Many of the systems from which sample data are collected feature structural heterogeneity at the mesoscopic scale (i.e. communities) and limited inter-community diffusion. Here we show that the interplay between these two features can yield a significant bias in the global characteristics inferred from the data. We present a general framework to quantify this bias, and derive an explicit corrective factor for a wide class of systems. Applying our analysis to a recent high-profile survey of conflict mortality in Iraq suggests a significant overestimate of deaths.
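One way to see how such a bias can arise, as a toy illustration rather than the paper's framework, is to simulate heterogeneous community rates together with a sampling scheme whose reach correlates with local activity; all numbers and names below are placeholder assumptions.

```python
# Toy simulation (an illustration only, not the paper's corrective framework):
# communities have heterogeneous event rates, and limited diffusion means the
# sample disproportionately reaches the more active communities.
import numpy as np

rng = np.random.default_rng(4)
n_communities = 100
sizes = rng.integers(200, 2000, n_communities)                 # community sizes
rates = rng.lognormal(mean=-4.0, sigma=1.0, size=n_communities)  # heterogeneous rates
true_rate = np.average(rates, weights=sizes)

# Reach probability grows with local activity, mimicking limited diffusion
# of information out of quieter communities.
reach_prob = rates / rates.max()
sampled = rng.random(n_communities) < reach_prob
naive_estimate = np.average(rates[sampled], weights=sizes[sampled])

print(f"true population rate   {true_rate:.4f}")
print(f"naive sample estimate  {naive_estimate:.4f}  (typically biased upward)")
```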
Networks of companies can be constructed by using return correlations. A crucial issue in this approach is to select the relevant correlations from the correlation matrix. In order to study this problem, we start from an empty graph with no edges where the vertices correspond to stocks. Then, one by one, we insert edges between the vertices according to the rank of their correlation strength, resulting in a network called asset graph. We study its properties, such as topologically different growth types, number and size of clusters and clustering coefficient. These properties, calculated from empirical data, are compared against those of a random graph. The growth of the graph can be classified according to the topological role of the newly inserted edge. We find that the type of growth which is responsible for creating cycles in the graph sets in much earlier for the empirical asset graph than for the random graph, and thus reflects the high degree of networking present in the market. We also find the number of clusters in the random graph to be one order of magnitude higher than for the asset graph. At a critical threshold, the random graph undergoes a radical change in topology related to percolation transition and forms a single giant cluster, a phenomenon which is not observed for the asset graph. Differences in mean clustering coefficient lead us to conclude that most information is contained roughly within 10% of the edges.
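To make the edge-insertion construction concrete, here is a minimal sketch assuming a precomputed return-correlation matrix; the toy data, library calls and step counts are illustrative and are not taken from the paper.

```python
# Illustrative sketch of building an "asset graph": edges are inserted between
# stocks in decreasing order of return correlation, and cluster counts and the
# mean clustering coefficient are tracked as the graph grows.
import numpy as np
import networkx as nx

def asset_graph_growth(corr, steps=None):
    n = corr.shape[0]
    # Rank all distinct stock pairs by correlation strength, strongest first.
    pairs = [(corr[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort(reverse=True)
    g = nx.empty_graph(n)
    history = []
    for k, (rho, i, j) in enumerate(pairs[: steps or len(pairs)], start=1):
        g.add_edge(i, j, weight=rho)
        non_isolated = [v for v in g if g.degree(v) > 0]
        history.append({
            "edges": k,
            "clusters": nx.number_connected_components(g.subgraph(non_isolated)),
            "mean_clustering": nx.average_clustering(g),  # averaged over all vertices
        })
    return history

# Toy usage with random returns standing in for empirical data.
rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 30))            # 250 days, 30 stocks
corr = np.corrcoef(returns, rowvar=False)
print(asset_graph_growth(corr, steps=50)[-1])
```

A random-graph benchmark could be obtained by shuffling the pair ranking before insertion, which is the kind of comparison the abstract describes.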
Tick size is an important aspect of the micro-structural level organization of financial markets. It is the smallest institutionally allowed price increment, has a direct bearing on the bid–ask spread, influences the strategy of trading order placement in electronic markets, affects the price formation mechanism, and appears to be related to the long-term memory of volatility clustering. In this paper we investigate the impact of tick size on stock returns. We start with a simple simulation to demonstrate how continuous returns become distorted after confining the price to a discrete grid governed by the tick size. We then move on to a novel experimental set-up that combines decimalization pilot programs and cross-listed stocks in New York and Toronto. This allows us to observe a set of stocks traded simultaneously under two different ticks while holding all security-specific characteristics fixed. We then study the normality of the return distributions and carry out fits to the chosen distribution models. Our empirical findings are somewhat mixed and in some cases appear to challenge the simulation results.
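As an illustration of the kind of grid-distortion simulation described above (not the authors' code), one might generate a continuous log-price path, snap prices to a tick grid, and test the resulting returns for normality; the price level, volatility and tick size below are arbitrary placeholders.

```python
# Toy illustration of how confining prices to a discrete tick grid distorts
# otherwise continuous returns.
import numpy as np
from scipy import stats

def grid_returns(n=10_000, s0=20.0, sigma=0.001, tick=0.05, seed=1):
    rng = np.random.default_rng(seed)
    log_prices = np.log(s0) + np.cumsum(rng.normal(0.0, sigma, size=n))
    continuous = np.diff(log_prices)
    # Snap each price to the nearest multiple of the tick size.
    gridded = np.round(np.exp(log_prices) / tick) * tick
    discrete = np.diff(np.log(gridded))
    return continuous, discrete

cont, disc = grid_returns()
for label, r in (("continuous", cont), ("tick = 0.05", disc)):
    _, p = stats.normaltest(r)
    print(f"{label:12s} excess kurtosis={stats.kurtosis(r):+.2f}  normality p={p:.3g}")
```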
This paper surveys the history of an alternative view of value creation to that associated with industrial production. It argues that technical breakthroughs and social innovations in actual value creation render the alternative - a value co-production framework - ever more pertinent. The paper examines some of the implications of adopting this framework to describe and understand business opportunity, management, and organizational practices. In the process, it reviews the research opportunities a value co-production framework opens up. Copyright © 1999 John Wiley & Sons, Ltd.
This paper proposes that responsibility is better understood as an aesthetic rather than as an ethical construct. It suggests that aesthetics is a distinctively European way of conceptualizing ‘cooperation and responsible management’, but nevertheless finds conditions that will internationalize this conceptualization soon. It also maintains that it is how responsibility is secured that determines whether cooperation is seen as appealing or repulsive. Finding in aesthetics a measure to determine whether managers enable cooperation responsibly, it argues that the aesthetics of managing cooperation responsibly can no longer be ignored. European Management Review (2005) 2, 28–35, advance online publication, 27 May 2005, doi:10.1057/palgrave.emr.1500028
The author looks at diverse concepts and roles of trust in the challenge of decarbonising energy systems, drawing on 25 years of personal experience in the fields of energy and environmental policy research. The paper focuses on three issues: public trust in science, institutional trust in making technology choices, and the idea that high-trust societies are more sustainable than those exhibiting low trust. While trust is a key concept in understanding the public acceptability of technology choices, it is only one of a suite of interrelated concepts that must be addressed, which also includes liability, consent, and fairness. Furthermore, rational distrust among competing institutional world views may be critical in understanding the role of social capital in socioeconomic and technological development. Thus the concept of trust has become a portmanteau, carrying a diverse range of ideas and conditions for sustainable energy systems. The paper concludes with three emphases for decision makers. First, the issue is the energy system, not particular generating technologies. Second, the energy system must be recognized to be as much a social system as it is a technical one. Third, the system requires incorporation of the minimum level of diversity of engineering technologies and social actors to be sustainable.
This is a four-part work providing an international view of climate change which is designed to complement the Intergovernmental Panel on Climate Change Second Assessment report. The complete work is a benchmark document summarising current understanding of the contributions of the social sciences to the interdisciplinary issues of global climate change. It brings together widely scattered information and highlights both current research strengths and key areas for further research. The books survey the state of the art of the social sciences with regard to global climate change research; recognise global climate change research as policy relevant; review what is currently known, uncertain, and unknown in the social science areas relevant to global change; assemble and summarise findings from the international research community; report these findings within behavioural and interpretive frameworks as appropriate; and assemble this information to enlighten the future formulation and conduct of policy-relevant scientific research. The volumes in this four-part work cover resources and technology (Volume 2); tools for policy analysis (Volume 3); and, in Volume 1, begin with the societal framework. Volume 4 is presented as a readable summary for non-professionals. The first chapter of Volume 4 comprises the introductory section of each of the three more specialist volumes.
We face a problem of anthropogenic climate change, but the Kyoto Protocol of 1997 has failed to tackle it. A child of summits, it was doomed from the beginning by the way it came into being; Kyoto has given only an illusion of action. It has become the sole focus of our efforts, and, as a result, we have wasted fifteen years.
We have called this essay “The Wrong Trousers” evoking the Oscar-winning animated film of that name. In that film, the hapless hero, Wallace, becomes trapped in a pair of automated ‘Techno Trousers’. Whereas he thought they would make his life easier, in fact, they take control and carry him off in directions he does not wish to go.
We evoke this image to suggest how the Kyoto Protocol has also marched us involuntarily to unintended and unwelcome places. Just as the enticingly electro-mechanical “Techno Trousers” offered the prospect of hugely increasing the wearer’s power and stride, so successful international treaties leverage the power of signatory states in a similar way, making possible together what cannot be achieved alone. The Kyoto Wrong Trousers have done something similar to those who fashioned and subscribed to the agreement. To set a new course, we need to understand how we have gone wrong so far. Accordingly, the essay proceeds in three sections, as follows:
I. Kyoto: From Treaty to Creed
Recognition is growing of the many and serious shortcomings of the Kyoto Protocol and these are explained in this section. Some are technical; but others come because Kyoto has become a surrogate for other fights, as well as a dogma. Before the next meeting in Bali, Indonesia, locks down the post-2012 phase of climate change policy, there is a slim window of opportunity to implement a more productive approach.
II. Why Did the Kyoto Protocol Fail?
The Kyoto Protocol was doomed from the beginning because it was modelled on plausible but inappropriate precedents. We explain the failure of the Kyoto Protocol and discover what we can learn from its history in order to better design future policy.
We can discard the usual reasons given for the failure of the Kyoto protocol: that there is no problem of climate change; that certain key states have not signed up; or that political will was lacking. As the IPCC shows, there is a problem. Certain states, notably the USA and Australia, may have refused to sign up, but Kyoto has failed even in Europe and Japan, both of which enthusiastically adopted it and have paid huge sums to meet targets via “carbon offset” credits. There is plenty of political will, but it is driving a defective political process.
The Kyoto Protocol failed because it is the wrong type of instrument (a universal intergovernmental treaty) relying too heavily on the wrong agents exercising the wrong sort of power to create, from the top down, a carbon market. It relies on establishing a global market by government fiat, which has never been done successfully for any commodity. Such fabricated markets invite sharp and corrupt practices, and these are now occurring on a large scale in the European Emissions Trading Scheme and through Kyoto Clean Development Mechanism scams such as HFC combustion, which alone accounts for two-thirds of all CDM payments to 2012. Built on false premises, the Protocol has also dodged the growing challenges that result from industrialisation in China and India, in particular the increasing use of coal in both countries.
Kyoto was constructed by quick borrowing from past practice with other treaty regimes dealing with ozone, sulphur emissions and nuclear bombs which, while superficially plausible, are not applicable in the ways that the drafters assumed because these were “tame” problems (complicated, but with defined and achievable end-states), whereas climate change is “wicked” (comprising open, complex and imperfectly understood systems). Technical knowledge was taken as sufficient basis from which to derive Kyoto’s policy, whereas “wicked” problems demand profound understanding of their integration in social systems, and their ongoing development.
The presentation of Kyoto as the only course of action has raised the political price of admitting its defects, not least because it would mean admitting that the non-signatories may have been right in practice, whatever their motives. Its advocates invested emotional as well as political capital in the process, making it difficult to contemplate the idea that it is fatally flawed. Its narrow focus on mitigating the emission of greenhouse gases (in which it has failed) has created a taboo on discussing other approaches, in particular, adaptation to climate change. Failure to adapt will cost the poor and vulnerable the most.
For the past fifteen years, it has given the concerned public an illusion of effective action, tranquillising political concern. This has been, perhaps, its most damaging legacy.
III. The Right Trousers
The final section sets down the principles that should underpin a viable engagement with climate security. In it, we take a radically different approach from the top-down command and regulatory regime of output targets that is Kyoto. Our approach is both older and simpler. It sets out to harness enlightened self-interest to drive a process designed to generate a range of possible solutions, which can be compared and assessed, mixed and matched, changed and refined as we pursue the goal of climate security.
In this essay, the reader will not find a detailed critique of the Kyoto mechanisms. Nor will the reader find a proposal for a different single solution in place of Kyoto. We have refrained from this because climate change is not a discrete problem amenable to any single shot solution, be it Kyoto or any other. Climate change is the result of a particular development path and its globally interlaced supply system of fossil energy. No single intervention can change such a complex nexus (although as the earlier sections have shown, the attempt to do so has produced unintended and unwelcome effects). There is no simple silver bullet.
Instead, we suggest that in cases like this, the best line of attack is not head-on. We suggest that the policy response to climate change should assemble instead a portfolio of approaches—silver buckshot, rather than silver bullet—that would move us in the right direction, even though it is impossible to predict which of these approaches might stimulate the necessary fundamental change. This is a process of social learning in which we must be always alert to maintain our trajectory towards the goal by constant course corrections and improvements which, by definition, cannot be prescribed precisely beforehand.
In the third section we elaborate the following seven basic principles of such a radically re-thought approach: 1. Use silver buckshot; 2. Abandon universalism; 3. Devise trading schemes from the bottom up; 4. Deal with problems at the lowest possible levels of decision-making; 5. Invest in technology R&D; 6. Increase spending on adaptation; 7. Understand that successful climate policy does not necessarily focus instrumentally on the climate.
Throughout we emphasise the urgency of re-framing climate policy in this way because whereas today there is strong public support for climate action, continued policy failure on the Kyoto principles spun as a story of success could lead to public withdrawal of trust and consent for action, whatever form it takes.
In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters1, 2, 3, 4. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs5, 6. Here, building on previous stochastic models of consumer–resource interactions between species1, 2, 3, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner–partner interactions, as exemplified by plant–animal mutualistic networks7. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks8, 9, 10. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer–contractor interactions exhibits similar structural patterns to plant–animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society11, 12, 13, 14.
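The following toy generator is only a loose analogue, under the assumption of niche-style specialization rules; it is not the bipartite cooperation model of the paper, whose exact rules and three empirical input parameters are not reproduced here.

```python
# Toy bipartite generator in the spirit of niche-type stochastic models
# (NOT the model in the paper): each plant gets a niche value, each animal
# gets a niche centre and an interaction range, and links form where plant
# niches fall inside an animal's range. All rules and parameters are
# illustrative assumptions.
import numpy as np

def toy_bipartite(n_plants=50, n_animals=30, mean_range=0.15, seed=3):
    rng = np.random.default_rng(seed)
    plant_niche = rng.uniform(0, 1, n_plants)
    animal_centre = rng.uniform(0, 1, n_animals)
    animal_range = rng.exponential(mean_range, n_animals)
    # Incidence matrix: animal i visits plant j if j's niche lies in i's range.
    lo = animal_centre[:, None] - animal_range[:, None] / 2
    hi = animal_centre[:, None] + animal_range[:, None] / 2
    return ((plant_niche >= lo) & (plant_niche <= hi)).astype(int)

m = toy_bipartite()
print("connectance:", m.mean().round(3))
print("animal degree distribution:", np.bincount(m.sum(axis=1)))
```

Metrics such as nestedness and modularity, which the abstract cites as the key tests, would then be computed on the resulting incidence matrix.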
The authors motivate social capital arguments at the world-system level through the analysis of world-trade flows and nation status, 1965 to 1980, with specific attention to contextual changes in global trade and stratified effects on participation in trade within it. They generate measures of structural autonomy based on world-trade data from the United Nations Commodity Trade Statistics Index and incorporate these measures into robust regression models of the determinants of nation status. The authors find support for the overall positive effects of structural autonomy on nation status in 1965 and 1970 but find that these effects dissipate by 1980. They then use quantile regressions to find that only high-status countries experience significant returns on structural autonomy in any of the 3 observation years. The authors combine network and institutional perspectives on trade to argue that changes in the context of world trade between 1965 and 1980 affect the benefits that social capital can reap and for whom.
Over the past half century, consumers in Australia have increasingly been confronted with a plethora of health food products. This paper focuses on health food that encourages consumption through the promise of health benefits. In this context, media representation of such food serves as a lens to explore the spread of consumer culture in Australia. Using a historical perspective, this paper asserts that in promoting such foods, food “experts” form an advisory nexus in an increasing context of “gastro-anomy” that Fischler (1980) speaks of. Fifty years of advertising, editorial content and articles are examined from the Australian Women’s Weekly. Warde’s (1997) antinomies of tastes are used as a starting point to show how the anxiety and risks associated with food consumption
Over the past decade consumers in Australia and elsewhere have increasingly been confronted with a fast-growing number of health food products. This profusion of health foods is accompanied by a proliferation in popular culture of professional nutritional advice on what is 'good to eat'. The genre of lifestyle magazines is one popular medium via which healthy practices and health foods are frequently reported. In this paper we use a visual discourse analysis of food-related editorial and advertorial content sourced from the long-running and popular Australian Women's Weekly to investigate how lifestyle magazines have been one important locus for constituting health-conscious consumers. Taking up a Foucauldian governmentality perspective, we trace how this active, responsible conceptualization of the consumer, which we refer to as the 'healthy food consumer', has increased in prevalence in the pages of the Australian Women's Weekly over time. Based on our analysis we suggest that the editorial and advertorial content offers individuals models of conduct about which preventative activities they might engage in, and plays an important role in shaping how we think about taking care of our health through eating.
The social construction of health food over half a century of magazine advertising in the Australian context is examined using documentary evidence and a socio-historical perspective. This paper explores the idea that health foods have over the years been socially constructed by multiple institutional players. Using a socio-historical analysis of one popular magazine, The Australian Women’s Weekly, and specifically of its advertising (as the overt manifestation of market forces), we try to identify the possible actors involved.
Neuroscience is increasingly considered a possible basis for new business and management practices. A prominent example of this trend is neuromarketing – a relatively new form of market and consumer research that applies neuroscience to marketing by employing brain imaging or measurement technology to anticipate consumers’ response to, for instance, products, packaging or advertising. In this paper, we draw attention to the ways in which certain neuromarketing technologies simultaneously reveal and enact a particular version of the consumer. The revelation is ironic in the sense that it entails the construction of a contrast between what appears to be the case – consumers’ accounts of why they prefer certain products over others – and what can be shown to be the case as a result of the application of the technology – the hidden or concealed truth. This contrast structure characterises much of the academic and popular literature on neuromarketing, and helps explain the distribution of accountability relations associated with assessments of its effectiveness.
Archaeology abounds in visual media, both media artifacts from the past, as well as means of documenting and studying those artifacts. Classic and long-established approaches to visual media include iconography and iconology (semantics and the identification of visual content), semiotics (the systems and structures of communication, signification and meaning), as well as graphics, cartography, planning and charting (communicative efficacy, the geometry of 2D to 3D translation, and information compression).
We shift emphasis in this paper away from communication, iconology, and visuality per se, the content and structure of imagery, toward the way visuality works in archaeology, from visual media as material forms (graphics, maps, photographs) to the work that visual media perform in archaeology. Along the way we present a criticism of the stress placed in much discussion of visual media on their representational qualities, that is, their fidelity to what they are taken to represent, to their mimetic qualities and their degree of correspondence to what is represented.
It is not that we consider such inquiry to be wrong, but rather that communication and meaning are often secondary functions of media. Ironically, what often matters most about visual media, we would claim, is not what they represent, but the way they fit into archaeological work on the remains of the past. In this development of McLuhan’s old adage that it is the medium that matters, we focus attention on practice and discourse, drawing particularly on the field of science and technology studies (STS). This emphasis upon the way images work is why we term our interest one in the political economy of visual media—recovering the work done by visual media in archaeology through networks of production, distribution and consumption. This leads us to identify some of the implications of new digital media, not for more spectacular summations of data about the past or photorealistic simulations, but as open fora for the co-production of pasts that matter now and for visions of future community.
In many real-world networks, the rates of node and link addition are time dependent. This observation motivates the definition of accelerating networks. There has been relatively little investigation of accelerating networks and previous efforts at analyzing their degree distributions have employed mean-field techniques. By contrast, we show that it is possible to apply a master-equation approach to such network development. We provide full time-dependent expressions for the evolution of the degree distributions for the canonical situations of random and preferential attachment in networks undergoing constant acceleration. These results are in excellent agreement with results obtained from simulations. We note that a growing nonequilibrium network undergoing constant acceleration with random attachment is equivalent to a classical random graph, bridging the gap between nonequilibrium and classical equilibrium networks.
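A hedged sketch of what a constant-acceleration, random-attachment growth process might look like in simulation is given below, with the result compared against an Erdős–Rényi graph of matched size; the acceleration rate and other parameters are illustrative assumptions rather than values from the paper.

```python
# Sketch of a growing network under "constant acceleration" with random
# attachment: at step t the newly added node makes a number of links that
# grows linearly in t, each to a uniformly chosen existing node.
import numpy as np
import networkx as nx

def accelerating_random_network(n_steps=2000, accel=0.01, seed=2):
    rng = np.random.default_rng(seed)
    g = nx.Graph()
    g.add_node(0)
    for t in range(1, n_steps):
        g.add_node(t)
        m = max(1, rng.poisson(accel * t))          # link rate grows with time
        targets = rng.choice(t, size=min(m, t), replace=False)
        g.add_edges_from((t, int(v)) for v in targets)
    return g

g = accelerating_random_network()
er = nx.gnm_random_graph(g.number_of_nodes(), g.number_of_edges(), seed=2)
for name, h in (("accelerating", g), ("Erdos-Renyi", er)):
    degs = [d for _, d in h.degree()]
    print(name, "mean degree", round(float(np.mean(degs)), 2), "max degree", max(degs))
```

Comparing the two degree distributions is one way to illustrate the abstract's point that accelerated random attachment approaches the classical random-graph limit.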
This paper reviews the literature on small-world networks in social science and management. This relatively new area of research represents an unusual level of cross-disciplinary research within social science and between social science and the physical sciences. We review the findings of this emerging area with an eye to describing the underlying theory of small worlds, the technical apparatus, promising facts, and unsettled issues for future research.
In their introduction of 'Debating Rationality: Non-rational Aspects of Organizational Decision Making,' editors Jennifer Halpern and Robert Stern trace the social science foundations of rationality and its research uses, stressing conceptions and applications from multiple research traditions. Shapira provides a pointed summary of the foundations of rationality used in the behavioral decision theory literature. Gibbons presents a similar review from the perspective of the economics of internal organization and its dialogue with organization theory. The three chapters work particularly well together to outline the terms of the debate. In the seven chapters that follow, authors whose work has helped shape contemporary research bring their expertise to the business of studying decisions and organizations. Rather than discarding the economic rationality model, the authors either advocate its revision or identify some of its inconsistencies.
In 1987, just a year before his death, the British epidemiologist Archie Cochrane gave one of his last public interviews. Cochrane had led an illustrious career, helping to establish the fields of epidemiology and public health in Britain and internationally.
This paper develops the argument that institutional mechanisms support changes in organizational strategies in ways that contrast with the standard interpretation of institutional “iron cages” that pressure organizations to conform. We specify three institutional process mechanisms that support organizational change – dominant logic-consistent activity, external charters, and peer emulation – and we test these claims with longitudinal data on the emerging strategies in early U.S. intercollegiate athletics. We argue that the supporting institutional mechanisms affect the incorporation patterns of intercollegiate programs in basketball, ice hockey, and lacrosse over the period from the late nineteenth century to the present. The research strategy of examining the spread of three different sports programs, each a proxy for different strategies of resources and visibility, provides evidence on the comparative pattern of effects of the three institutional mechanisms. Results indicate that all three institutional support mechanisms affect the incorporation of the intercollegiate programs. Differences in the pattern of incorporation across the three strategies provide robust evidence for alternatives to a prevailing “iron cage” view of institutional pressures and constraints. These findings also reinforce the importance of specifying field-level mechanisms to supplement a focus on organization-level mechanisms.
The philosopher of art Roger Scruton has claimed that photographic images are not representations, on the basis of the role of causal rather than intentional processes in arriving at the content of a photographic image (Scruton, 1981). His claim was controversial at the time, and still is, but had the merit of being a springboard for asking important questions about what kinds of representation result from the technologies used in depicting and visualising. In the context of computational picturing of different kinds, in imaging and other forms of visualisation, the question arises again, but this time in an even more interesting form, since these techniques are often hybrids of different principles and techniques. A digital image results from a complex interrelationship of physical, mathematical and technological principles, embedded within human and social situations. This paper consists of three sections, each presenting a view of the question whether digital imaging and digital visual artefacts generally are representations, from a different perspective. These perspectives are not representative, but aim only to accomplish what Scruton’s paper did succeed in accomplishing, that is, being a provocation and a springboard for a broader discussion.
Angela Wilkinson explains how scenario planning can help anticipate different energy futures.
Graham Molitor's article provides a timely prompt for reflecting on the value of scenario practices, especially given several data sources indicating their usage has increased significantly since 2001 (e.g. Ramirez, Selsky, & van der Heijden, 2008, p. 9). Molitor is not alone in his struggle to clarify the effectiveness of scenario practices. Others, including myself, are endeavouring to address similar questions: how do we judge effectiveness, and what do we mean by 'effectiveness' when referring to such practices? As he implicitly suggests, his critique does not imply that we should throw the scenario 'baby out with the bathwater'. It is all too easy to agree with some of the criticisms of scenarios raised by Molitor. Three aspects are particularly relevant. The first is that futures work seems to be characterised by highly personalised practices. Such practices can be introduced by someone who thought they were "a good idea" but who failed to reflect fully on the complexity of the situation or to base their choice of techniques on sound theoretical principles. Secondly, as much scenario work is secret – particularly in the military and corporate sectors – and/or difficult to assess, it is very hard to engage in comparative research. Thirdly, in common with other practitioner-led fields, scenario practices are blessed with a high degree of innovation and entrepreneurship and cursed by a lack of reliable accounts that render explicit what has worked and what has not, why, and for whom in different settings.
A new approach to scenarios focused on environmental concerns, changes and challenges, i.e. so-called 'environmental scenarios', is necessary if global environmental changes are to be more effectively appreciated and addressed through sustained and collaborative action.
On the basis of a comparison of previous approaches to global environmental scenarios and a review of existing scenario typologies, we propose a new scenario typology to help guide scenario-based interventions. This typology makes explicit the types of and/or the approaches to knowledge ('the epistemologies') which underpin a scenario approach.
Drawing on previous environmental scenario projects, we distinguish and describe two main types in this new typology: 'problem-focused' and 'actor-centric'. This leads in turn to our suggestion for a third type, which we call 'RIMA'—'reflexive interventionist or multi-agent based'. This approach to scenarios emphasizes the importance of the involvement of different epistemologies in a scenario-based process of action learning in the public interest. We suggest that, by combining the epistemologies apparent in the previous two types, this approach can create a more effective bridge between longer-term thinking and more immediate actions. Our description is aimed at scenario practitioners in general, as well as those who work with (environmental) scenarios that address global challenges.
Greek translation of 'Science: The Very Idea'.
Recently I presented, with others, a general statement of the sequence of social and intellectual processes which characterize the emergence, growth and final decline of specific areas of scientific endeavour. A central concern of my own research has been to examine the extent to which scientific activity in one particular area, that associated with research into the pulsar phenomenon, corresponds to the sequence of processes described in the theoretical statement. An obvious preliminary objective of my research was to write an outline history of the intellectual development of the pulsar area. Although many of the methodological problems relating to the investigation of the social development of scientific specialties have already been examined, it is less widely realized that methodological problems of equal difficulty occur in the analysis of the intellectual development of specialties, even though much of the basic data can be obtained without intervention in the on-going social process. In this paper I report my attempt to describe the intellectual history of pulsar astronomy. When I began this part of my study I was unaware of any special methodological problems. As I progressed, I not only became acutely aware that there were such problems, but I also realized they would prevent me from carrying out my original intention of providing a straightforward chronological history of this particular intellectual area.
It is a routine teaching day. The advanced level course in science and technology studies (S&TS) is holding its fourth weekly class of the semester. The students dutifully indulge the professor in his incantation of one of the iconic case studies of the field: Langdon Winner’s well-known analysis of Moses’ bridges. Winner claims that the bridges built by Moses on Long Island are an example of a technology which has political qualities: by this he means that the bridges were designed (consciously or unconsciously) to have a particular social effect.
There is in science and technology studies a perceptible new interest in matters of ‘ontology’. Until recently, the term ‘ontology’ had been sparingly used in the field. Now it appears to have acquired a new theoretical significance and lies at the centre of many programmes of empirical investigation. The special issue to which this essay is a contribution gathers a series of enquiries into the ontological and reflects, collectively, on the value of the analytical and methodological sensibilities that underpin this new approach to the make-up of the world. To what extent and in what sense can we speak of a ‘turn to ontology’ in science and technology studies? What should we make of, and with, this renewed interest in matters of ontology? This essay offers some preliminary responses to these questions. First, we examine claims of a shift from epistemology to ontology and explore in particular the implications of the notion of ‘enactment’. This leads to a consideration of the normative implications of approaches that bring ‘ontological politics’ to centre stage. We then illustrate and pursue these questions by using an example – the case of the ‘wrong bin bag’. We conclude with a tentative assessment of the prospects for ontologically sensitive science and technology studies.
Over the past decade, researchers have become increasingly interested in the theoretical and practical issue of governance as it relates to information and communication technologies. However, while the field has grown with the proliferation and use of such technologies, its scope and focus are far from clear: what counts as governance in settings, in which people increasingly interact through networked digital media? How can we think about interaction, coordination and control in these environments? What is the role of technologies in creating and maintaining regimes of governance? And what methodologies and methods are appropriate for understanding them? This paper draws on an interdisciplinary workshop held at Oxford University to have a closer look at some of these issues. It suggests that a key to understanding the heterogeneity of workshop contributions is to attend to the performativity of governance and governance research, the analytic status of ‘technology’ and the conceptual and methodological devices we use to research