We present a framework for solving the strategic problem of assigning retailers to facilities in a multi-period single-sourcing product environment under uncertainty in the demand from the retailers and the cost of production, inventory holding, backlogging and distribution of the product. By considering a splitting variable mathematical representation of the Deterministic Equivalent Model, we specialize the so-called Branch-and-Fix Coordination algorithmic framework. It exploits the structure of the model and, specifically, the non-anticipativity constraints for the assignment variables. The algorithm uses the Twin Node Family (TNF) concept. Our procedure is specifically designed for coordinating the selection of the branching TNF and the branching S3 set, such that the non-anticipativity constraints are satisfied. Some computational experience is reported.
In this paper, we develop a dynamic model of institutional share dumping surrounding control events. Uninformed institutional investors dump shares, despite trading losses, in order to manipulate share prices and trigger activism by activist "relationship" investors. Nonactivist institutional investors, who have private information regarding both firm value and the quality of their own information, are motivated to trade not only by trading profits but also by a desire to protect the value of their inventory and to disguise the quality of the information underlying their trades. Relationship investors, who use price and volume information to identify target firms, profit both from improving firm performance and from their private information about their own targeting activity. In addition to explicating recent empirical results on the relationship between institutional investor trading and corporate control events, the paper provides a number of new insights into the interaction between market microstructure and corporate governance, including predictions regarding the effect of share volume on subsequent governance activity, the relationship between the trading patterns of activist and nonactivist strategic investors, the effect of order flow on subsequent shareholder activism, and the effect of institutions' portfolio positions on the informativeness of their trading activity.
A prominent feature of the restructuring of work in the public services has been the growing importance of assistant roles. This article examines the regulation of teaching assistant (TA) roles in 10 primary schools. It examines entry into TA roles, the structure of TA roles and the consequences of TA roles for teachers and assistants. The article develops a series of arguments to explain the variations in the TA workforce between authorities and schools.
One of the more interesting counter-intuitive findings in organizational research is that success breeds failure. This counter-intuitive finding has been described in terms of core rigidities, core incompetencies, and even the Icarus Paradox. The literature on these topics has concluded that success yields overconfidence and myopia in firms and their managers, and this eventually causes failure. We augment this literature by suggesting that success breeds not only internal pathologies that cause firms to misuse their established resources over time, but also external pathologies that cause firms to lose access to new resources. In particular, success influences stakeholders’ perceptions of firms, causing firms to lose the benefits of underdog status and gain the problems of overlord status. We term this notion that success warps images of the successful, leading to their decline over time, the Helios Paradox, and suggest that dominant firms must counter natural tendencies to succumb to both the Icarus and Helios Paradoxes if they are to remain successful over time.
This paper presents a dynamic framework that describes how firms allocate limited resources between improving their competitive position relative to rivals and their communal position shared with rivals. This dynamic framework outlines how organizational field-level dynamics influence industry attractiveness and thereby alter a firm's incentive to engage in communal strategy relative to competitive strategy. Communal strategy, in turn, can influence the institutions governing an organizational field and thereby shape industry attractiveness. Overall, the interplay between factors exogenous and endogenous to an industry causes change in an organizational field and so determines the nature of the communal environment shared by a firm and its rivals over time. Analysis of this interplay provides insight into the micro-level drivers of macro-level change and furthers understanding of the conditions under which rivalrous firms voluntarily contribute to the collective betterment of their industry despite collective rationality.
We extend theories of self-regulation of physical commons to analyze self-regulation of intangible commons in modern industry. We posit that when the action of one firm can cause spillover harm to others, firms share a type of commons. We theorize that the need to protect this commons can motivate the formation of a self-regulatory institution. Using data from the US chemical industry, we find that spillover harm from industrial accidents increased after a major industry crisis and decreased following the formation of a new institution. Additionally, our findings suggest that the institution lessened spillovers from participants to the broader industry.
The article reviews the book "The Keystone Advantage: What the New Dynamics of Business Ecosystems Mean for Strategy, Innovation and Sustainability," by Marco Iansiti and Roy Levien.
In this paper, I seek to build a theoretical framework that explains how effectively different firms can use different types of corporate social responsibility (CSR) to influence stakeholders' perceptions of and reactions to different types of errors. CSR affects the errors stakeholders notice, how they frame them, how they respond to them, and how quickly any punishment wanes. Ex ante and ex post CSR decrease the likelihood that stakeholders will notice some errors, improve the framing of those errors that are noticed, and decrease the magnitude and duration of stakeholder attacks sparked by those errors.
Firms pursue competitive advantage through both individual and collective strategic actions. Because of the difficulties of coordinating collective action, industries are characterized by extended periods of individual activity, punctuated by waves of collective activity. Rational and self-interested firms engage in individual activities unless disrupted by a force ample to overcome the collective action problem. At key points in the life of an industry, legitimacy challenges arise, presenting incentives to collectivize. In a legitimacy challenge, mobilized groups of constituents attempt to gain control and change the institutional rules of the game. In order to regain control, firms gradually collectivize in a pattern akin to the resource mobilization perspective of social movement theory. The author builds a model and offers several testable propositions that trace the dynamic working balance between individual and collective activities within an industry during the emergence, maturity and decline stages. Over the life of an industry, these ebbs and flows in collective activity take the form of waves of collectivizing.
While interest in the concept of corporate reputation has gained momentum in the last few years, a precise and commonly agreed upon definition is still lacking. This paper reviews the many definitions of corporate reputation present in the recent literature and categorizes these definitions based on their similarities and differences. The purpose of the study is to review, analyze and evaluate prior definitional statements of corporate reputation. The analysis led us to conclude that the cluster of meaning that looks most promising for future definitional work uses the language of assessment and specific terms such as judgment, estimation, evaluation or gauge. Based on this review work and a lexicological analysis of the concept of reputation, we propose a new definitional statement that we think adds theoretical clarity to this area of study. The statement defines corporate reputation more explicitly and narrowly and distinguishes this concept from corporate identity, corporate image and corporate reputation capital. It is our hope that this study and the resulting definition will provoke further scholarship devoted to developing one voice when it comes to corporate reputation as a concept.
A central and contentious debate in many literatures concerns the relationship between financial and social performance. We advance this debate by measuring the financial–social performance link in mutual funds that practice socially responsible investing (SRI). SRI fund managers have an array of social screening strategies from which to choose. Prior studies have not addressed this heterogeneity within SRI funds. Combining modern portfolio and stakeholder theories, we hypothesize that the financial loss borne by an SRI fund due to poor diversification is offset as social screening intensifies because better-managed and more stable firms are selected into its portfolio. We find support for this hypothesis through an empirical test on a panel of 61 SRI funds from 1972 to 2000. The results show that as the number of social screens used by an SRI fund increases, financial returns decline at first, but then rebound as the number of screens reaches a maximum. That is, we find a curvilinear relationship, suggesting that two long-competing viewpoints may be complementary. Furthermore, we find that financial performance varies with the types of social screens used. Community relations screening increased financial performance, but environmental and labor relations screening decreased it. Based on our results, we suggest that literatures addressing the link between financial and social performance move toward in-depth examination of the merits of different social screening strategies, and away from the continuing debate on the financial merits of being socially responsible or not.
This article explores the challenge of merging brands successfully following corporate mergers. It develops a framework for corporate and product branding strategies, as well as for creating the appropriate brand identity among the merged firm's target consumers. A critical key to successful brand mergers is to align the architecture of the merged brand portfolio to brand strategy and identity.
Families are more likely to save if they can commit to savings before funds are in-hand (and subject to spending temptations). For low- and moderate-income U.S. families, an important savings opportunity arises annually, during income tax season. We study a group of low-income individuals in Tulsa, Oklahoma, who were encouraged to save parts of their federal refunds at the time of tax filing. Those who agreed to save directed a portion of their refund to a savings account and arranged to have the rest sent to them in the form of a check. Eligible individuals could also open low-cost savings accounts. We document the demand for these services, the characteristics of those who sought to participate, the savings goals of those who participated, the immediate savings generated by the program, and the disposition of savings a few months after receipt. This pilot study suggests that there may be demand among low-income families for a refund-splitting program that supports emergency needs as well as asset building, especially if a basic savings product is available to all at the time of tax filing.
While it is through creating and marketing products that firms achieve success, there are also significant opportunities for them to create value by exploiting the capabilities they utilize in creating products. However, seeing capabilities in this light demands new ways of thinking about product markets and marketing policies.
This paper shows that with (partial) irreversibility higher uncertainty reduces the responsiveness of investment to demand shocks. Uncertainty increases real option values making firms more cautious when investing or disinvesting. This is confirmed both numerically for a model with a rich mix of adjustment costs, time-varying uncertainty, and aggregation over investment decisions and time and also empirically for a panel of manufacturing firms. These “cautionary effects” of uncertainty are large—going from the lower quartile to the upper quartile of the uncertainty distribution typically halves the first year investment response to demand shocks. This implies the responsiveness of firms to any given policy stimulus may be much weaker in periods of high uncertainty, such as after the 1973 oil crisis and September 11, 2001.
Interactive Evolutionary Computation (IEC) utilises human–computer interaction as part of system optimisation and therefore constitutes an ideal platform for developing and improving systems that are subjectively influenced. Recently, interest in the use of IEC for subjectively influenced design practice has increased. In this paper the current state of the utilisation of IEC-based optimisation platforms for varying design applications is reviewed. The design fields are categorized into conceptual design, industrial design and, finally, artistic design. We also present problems facing IEC and current research efforts to resolve them.
We propose a sequential interactive genetic algorithm (IGA), a multi-objective IGA and a parallel IGA, and evaluate them with both simulated and real users. Combining human evaluation with an optimization system for engineering design enables us to embed domain-specific knowledge that is frequently hard to describe, i.e. subjective criteria and design preferences. We introduce a new IGA technique, the parallel IGA, to extend the previously introduced sequential single-objective GA and multi-objective GA. Experimental evaluation of the three algorithms on a multi-objective manufacturing plant layout design task shows that the multi-objective IGA and the parallel IGA clearly provide better results than the sequential IGA, and that the multi-objective IGA gives the most diverse results and the fastest convergence to a stable set of qualitatively optimum solutions, although the parallel IGA provides the best quantitative fitness convergence.
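The sequential single-objective variant can be sketched in a few lines. The code below is an illustrative toy, not the authors' implementation: the human evaluator is replaced by a simulated rating function, and the hidden preference `TARGET`, the population size and the operator choices are all our assumptions.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical "preferred" design

def simulated_user_rating(design):
    # In a real IGA a person scores each candidate; here we simulate
    # that subjective judgement by similarity to a hidden preference.
    return sum(1 for a, b in zip(design, TARGET) if a == b)

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulated_user_rating, reverse=True)
        parents = pop[: pop_size // 2]           # user-preferred half survives
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(len(child))        # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=simulated_user_rating)

best = evolve()
```

Because unmutated parents are carried over each generation (elitism), the best rating never decreases; the multi-objective and parallel variants replace the single rating with a vector of ratings or with concurrently evolving populations.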
As financial stability has gained focus in economic policymaking, the demand for analyses of financial stability and the consequences of economic policy has increased. Alternative macroeconomic models are available for policy analyses, and this paper evaluates the usefulness of some of these models from the perspective of financial stability. Financial stability analyses are complicated by the lack of a clear, consensus definition of ‘financial stability’, and the paper concludes that operational definitions of this term must be expected to vary across alternative models. Furthermore, since assessment of financial stability is in general based on a wide range of risk factors, one cannot expect a single model to satisfactorily capture all of them. Rather, a suite of models is needed. This is particularly true for the evaluation of risk factors originating and developing both inside and outside the financial system.
The Superbus project seems a promising transport alternative for the Zuiderzee Line. However, due to its innovative character and dynamics, uncertainty about its feasibility emerges. For successful implementation one has to consider the organisation of the project, the tasks and responsibilities of the actors, and the allocation of risks between parties. In selecting an appropriate arrangement for the organisation of the Superbus project one has to consider the four phases of product implementation (initiation, design, implementation, exploitation) and the 'system' under consideration (by means of the TRAIL layer scheme). This leads to three possible arrangements: a DBFMO, a DBFM+O, or a DB+F+M+O contract. There are objections against an arrangement in which all the activities are integrated (DBFMO) because there is as yet little experience with this PPP structure. In the DB+F+M+O arrangement there is a stricter separation between activities, which is particularly important in innovative projects like the Superbus, in which the time span between the completion of the first part and the start of the second part can be large. Both the DBFM+O and the DB+F+M+O contract are suitable arrangements, and they are quite similar regarding the allocation of risks. A slight preference is given to a DB+F+M+O contract because it is considered the safer option, which is important as the Superbus is a new project that already involves many risks.
Focusing on the interdependence of institutional change in the payments system and monetary policy, this book examines the different channels via which payment systems affect monetary policy. It is intended for researchers and practitioners in the field of monetary economics.
This paper uses a new data-set to examine how internal capital markets and foreign ownership affect investment. Our data allow us to compare the investment behaviour of listed subsidiaries with stand-alone firms while controlling for the investment opportunities of parent and subsidiary firms. We evaluate how the size of ownership and the geographical proximity of majority owners to their subsidiaries affect firm investment efficiency. We find that the investment of subsidiaries is more sensitive to investment opportunities than that of stand-alone firms and falls when investment opportunities of parent firms improve. This suggests that there are internal capital markets that reallocate funds towards units with better investment opportunities. We find that investment allocation is most efficient where parents have modest ownership stakes and are distant from their subsidiaries and when subsidiaries operate in well developed financial markets. These results indicate that influence costs imposed by dominant parents may outweigh their potential informational benefits, especially when subsidiaries are located in countries with weaker financial development.
This paper presents findings from research amongst European grocery retailers into their methods for measuring shrinkage. The findings indicate that: there is no dominant method for valuing or stating shrinkage; shrinkage in the supply chain is frequently overlooked; data is essential in pinpointing where and when loss occurs; and many retailers collect data at the stock-keeping unit (SKU) level, doing so every 6 months. These findings reveal that it is difficult to benchmark between retailers due to inconsistencies between measurement methods, and that there are opportunities for many of the retailers surveyed to improve their shrinkage measurement by adopting known good practice.
Purpose – Measures and measurement systems must reflect the context to which they are applied, so the contextual issues relating to retail shrinkage must be identified as a necessary precursor to measuring it. Without considering these issues, any decision on which method of shrinkage measurement to employ will be uninformed, arbitrary and, at best, intuitive. The objective of this paper is to scope out and summarise the contextual issues surrounding retail shrinkage in Europe's grocery sector and to offer a view on the implications of these issues for shrinkage measurement.
Design/methodology/approach – The methodology adopted was a scoping study of the key issues that influence shrinkage measurement, drawing these from prior research and exposing these findings to the informed opinion of a review panel for critique and to highlight areas for further investigation.
Findings – The study identified a range of contextual issues relating to shrinkage and summarised them into four categories, namely: stewardship and performance improvement; cost reduction and sales improvement; local effects of systemic issues; and the detailed nature of retailing.
Practical implications – The implications of these key issues are significant to the measurement of shrinkage in terms of the scope across the business from which shrinkage needs to be considered. This finding highlights the need to consider shrinkage as a systemic issue that extends across a business from design, through planning to operational execution. It also identifies the impact of shrinkage on increasing cost and depressing sales and considers the responsibility of management teams in addressing these matters.
Originality/value – This paper is theoretically original and thus of value to the academic community. It is also of value to the practitioner community in grocery retailing where shrinkage and its measurement is of worldwide strategic importance.
We study the statistics of the optimal path in both random and scale-free networks, where weights are taken from a general distribution P(w). We find that different types of disorder lead to the same universal behavior. Specifically, we find that a single parameter (S = AL^{-1/ν} for d-dimensional lattices, and S = AN^{-1/3} for random networks) determines the distributions of the optimal path length, including both the strong and weak disorder regimes. Here ν is the percolation connectivity exponent, and A depends on the percolation threshold and P(w). We show that for a uniform, Poisson or Gaussian P(w), the crossover from weak to strong disorder does not occur, and only weak disorder exists.
Although growth has occurred in contract employment arrangements in both the public and private sectors, scant research has been conducted on the organizations and employees affected by these arrangements. This study examines the employment relationship of long-term contracted employees using a social exchange framework. Specifically, we examine the effects of employee perceptions of organizational support from contracting and client organizations on their (a) affective commitment to each organization and (b) service-oriented citizenship behavior. We also examine whether felt obligation toward each organization mediates this relationship. Our sample consists of 99 long-term contracted employees working for four contracting organizations that provide services to the public on behalf of a municipal government. Results indicate that the antecedents of affective commitment are similar for the client and contracting organization. Employee perceptions of client organizational supportiveness were positively related to felt obligation and commitment to the client organization. Client felt obligation mediated the effects of client perceived organizational support (POS) on the participation dimension of citizenship behavior. Our study provides additional support for the generalizability of social exchange processes to nontraditional employment relationships. Implications for managing long-term contracted employees are discussed.
Social science often focuses on organizations, institutions, and phenomena that have become socially or economically important. This can create a population selection bias. Even if one samples all relevant organizations, and not only successful ones, one may selectively sample populations of organizations. The effects of such a selection bias are illustrated by using the case of density dependence in organizational entry and exit rates. While several studies have found non-monotonic density dependence in entry and exit rates, most studies only examine and estimate models of vital rates in populations that have become large. This creates a population selection bias that can give rise to spurious non-monotonic density dependence. The implications of this bias are discussed for organizational ecology and more generally for the selection of the populations and empirical settings studied in social science.
This Paper provides a simple theoretical framework for analysing simultaneous vertical and horizontal competition in excise taxes, and estimates equations informed by the theory on a panel of US state and federal excise taxes on cigarettes and gasoline. We also examine the role played by smuggling. The results are generally consistent with the theory, when the characteristics of the markets for the goods are taken into account. For neither good do federal excise taxes affect state taxes. Taxes in neighbouring states have a significant and large effect in the case of cigarettes, and a much weaker effect in the case of gasoline. We also find that in the setting of cigarette taxes, concerns about cross-border shopping play a more important role than concerns about smuggling.
It was not so long ago that mangoes, papaya and snow peas evoked images of tropical climes and exotic peoples. Recently, however, the consumption of so-called luxury fruits and vegetables has elicited a different sort of imagery. Far from the lure of seductive landscapes, today’s consumer is confronted with haunting images of toxic fields, child slavery and the African poor. Such images are part of a new morality of consumption, where consumers, NGOs, trade unions and global supermarkets aspire to ‘save’ the African worker from the downside of globalization. This article explores the ways in which Kenya’s highly valuable vegetable trade has become the field on which notions of justice, economic rights and African development are played out. Based on archival research and consumer interviews, it focusses specifically on how the ethical turn of UK consumers (and the retailers’ branding of this sensibility) is rooted in an older legacy, whereby 19th-century liberal considerations of duty, morality and progress inhabited the agenda of the late colonial state. The article suggests that, in both cases, African labor is an arena in which discourses of justice are played out, as a consuming public (re)constitute the African worker as an object of their duty and obligation.
Kenya's cut flower industry is often considered a testament to globalization --- a panacea for declining exports that has brought thousands of new employment opportunities to poor rural women. The industry, however, has also come to epitomize the dark side of globalization with booming economic growth lying side by side with human immiseration. In recent years this juxtaposition has spawned a new moral discourse, with images of toxic flower fields and lurid working conditions broadcast into the living rooms of suburban London homes. Such images are part of a new morality of consumption, where consumers, NGOs and global supermarkets aspire to 'save' the African worker from the downside of globalization.
The present study reflects on the role of the middle manager in the implementation of what has become known as evidence-based health care. This movement advocates that clinical practice be continually informed by the results of robust research and evidence. In our work exploring the complexity of ensuring that practice is informed by evidence, we have found that general managers have relatively little influence compared with clinicians, especially doctors. We argue that local professional groups work together in communities of practice, which are frequently uniprofessional. These boundaries affect the motivations for seeking improvement and upgrading and the way evidence and knowledge are perceived and interpreted. We argue that if the quality of health care is to be improved, we need to understand the complex, historically and contextually informed interactions between different professional groups and to design diffusion strategies that acknowledge this complexity.
We investigate the complex relationships between countries in the Eurovision Song Contest, by recasting past voting data from 1992–2003 in terms of a dynamical network. Our analysis shows that the UK is remarkably compatible, or ‘in tune’, with other European countries during the period of study. Equally surprising is our finding that some other core countries, most notably France, are significantly ‘out of tune’ with the rest of Europe during the same period. In addition, our analysis enables us to confirm a widely-held belief that there are unofficial cliques of countries; however, these cliques are not always the expected ones, nor can their existence be explained solely on the grounds of geographical proximity. The complexity in this system emerges via the group ‘self-assessment’ process, and in the absence of any central controller. One might therefore speculate that such complexity is representative of many real-world situations in which groups of ‘agents’ establish their own inter-relationships and hence ultimately decide their own fate. Possible examples include groups of individuals, societies, political groups or even governments.
A reconfigurable modular approach is proposed to construct a responsive manufacturing system to satisfy customers’ varied demands. In order to achieve this, the system employs a three-level control structure. We present the system’s dynamic behavior and optimal layout, as well as performance measures. Entropic measurement of demand and factory capacity is used to determine the locally optimal strategy under dynamic scheduling. We conclude the paper with cases of successful fast-responding corporations to gain insight into responsive reconfigurable systems.
Researchers have analyzed network robustness through two approaches, namely network modelling and network analysis. However, there is no consensus on the use of network metrics. This paper studies two metrics commonly used in the study of network robustness, namely average shortest path and diameter, and tries to develop a general principle for measuring network robustness. A metro network, the Newcastle Metro, is studied as a case. The paper concludes that robustness should be measured by both network disconnectedness and a pre-existing network metric, such as average shortest path or diameter, to investigate the impact of the loss of a link in the network.
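The measurement principle — report disconnectedness alongside the path-based metrics, since average shortest path and diameter are undefined on a disconnected network — can be sketched as follows. The ring-with-chord graph is a toy stand-in for a metro topology, not the Newcastle Metro data.

```python
from collections import deque

def bfs_dists(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metrics(edges, nodes):
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    all_d = []
    for s in nodes:
        d = bfs_dists(adj, s)
        if len(d) < len(adj):
            # Disconnection is reported separately: path-based
            # metrics would be infinite here.
            return {"connected": False}
        all_d += [d[t] for t in nodes if t != s]
    return {"connected": True,
            "avg_shortest_path": sum(all_d) / len(all_d),
            "diameter": max(all_d)}

nodes = range(6)
ring = [(i, (i + 1) % 6) for i in range(6)]   # a small "metro ring"
chord = (0, 3)                                # one cross-link
before = metrics(ring + [chord], nodes)
after = metrics(ring, nodes)                  # impact of losing the chord
```

On this toy graph, losing the chord leaves the ring connected but lengthens the average shortest path, which is exactly the kind of degradation the pre-existing metrics capture before any disconnection occurs.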
Two representative computational models for switching between internal models have been proposed (Imamizu et al., 2004), namely a mixture-of-experts model (Gomi and Kawato, 1993; Graybiel et al., 1994; Jordan and Jacobs, 1994), and a modular selection and identification for control model (MOSAIC) (Haruno et al., 2001; Doya et al., 2002; Wolpert et al., 2003). The behaviour in the mixture-of-experts model is determined by a gating module which selects between experts, while behaviour in the MOSAIC model combines recommendations from each expert. However, these two switching models alone seem unable to capture the essence of learned skills, learning behaviour and the occurrence of errors (cf. Diedrichsen et al., 2005; Thoroughman et al., 1999).
In this poster, a novel hierarchical control architecture for switching internal models is proposed. The architecture, which is based on a combination of predictive and adaptive control, can account for the observed results on combined mixture-of-experts and MOSAIC control behaviour. Experimental trials on verbal fluency tasks are designed to tap the subjects' switch costs (cf. Gurd et al., 2002). Subjects switch between producing verbal output from over-learned sequences, such as days of the week, months of the year, or letters of the alphabet (i.e. Monday, January, A; Tuesday, February, B; etc.). The additional time subjects take to switch between tasks, as well as the occurrence of errors, is measured. The results show that the deteriorating performance of the subjects (i.e. as the number of categories increases, or as time into the task progresses) appears to be due to the inconsistency between the predictive outputs of the internal models and the actual outcomes of the performed tasks.
An agent-based dynamic routing strategy for a generic automated material handling system (AMHS) is developed. The strategy employs an agent-based paradigm in which the control points of a network of AMHS components are modelled as cooperating node agents. Exploiting the agents' inherent ability to discover a set of shortest and near-shortest paths, an average-flow route selection algorithm is developed to scatter the load across an AMHS. Its performance is investigated through a detailed simulation study and benchmarked against the shortest-path algorithm. The results of the simulation experiments are presented and performance is compared under a number of indices, including hop count, flow and the ability to balance network loading.
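The average-flow selection rule can be sketched in a few lines of Python. This is a minimal illustration under assumed data structures (candidate paths as node lists, a dict of current link loads); the paper's actual agent protocol is richer:

```python
def average_flow(path, flow):
    """Mean load over the links of a path; `flow` maps (u, v) links to load."""
    links = list(zip(path, path[1:]))
    return sum(flow.get(link, 0) for link in links) / len(links)

def select_route(candidates, flow):
    """Among shortest and near-shortest candidate paths, prefer the one
    whose links carry the lowest average flow, scattering the load."""
    return min(candidates, key=lambda p: average_flow(p, flow))

# Two equal-length routes; the lightly loaded one is chosen.
paths = [["A", "B", "C"], ["A", "D", "C"]]
load = {("A", "B"): 5, ("B", "C"): 5, ("A", "D"): 1, ("D", "C"): 1}
print(select_route(paths, load))   # ['A', 'D', 'C']
```

Averaging over the candidate's links (rather than minimizing hop count alone) is what lets the strategy balance network loading.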
Even structurally simple supplier–customer systems can be operationally complex. This operational complexity can be colloquially defined as the uncertainty associated with managing the dynamic variations, in time or quantity, across information and material flows at the supplier–customer interface. This paper proposes a means of measuring the information demands placed on supplier–customer systems, as a result of this uncertainty.
This paper mathematically models the operational complexity of supplier–customer systems from an information-theoretic perspective. A unique feature of this measure is that it captures, in relative terms, the expected amount of information required to describe the state of the system. The measure provides flexibility in the scope and detail of analysis, while at the same time allowing a systematic hierarchical approach.
The application of the measure allows valuable insights to be obtained in terms of the degree of uncertainty, level of control and the detail of monitoring required to manage the operational complexity of supplier–customer systems.
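A hedged sketch of the basic information-theoretic ingredient behind such a measure: the Shannon entropy of the observed state distribution, i.e. the expected information needed to describe the system's state. (The paper's full measure layers scope and hierarchical detail on top of this.)

```python
from collections import Counter
from math import log2

def state_entropy(observed_states):
    """Shannon entropy, in bits, of an observed sequence of system states."""
    counts = Counter(observed_states)
    n = len(observed_states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A stable supplier-customer interface needs no information to describe...
print(state_entropy(["on_time"] * 8))                       # 0.0
# ...while an erratic one demands more monitoring information.
print(state_entropy(["on_time", "late", "early", "late"]))  # 1.5
```

The measure is relative in exactly the sense the abstract describes: it grows with the uncertainty of the interface, not with the physical size of the system.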
Although the word complexity is often used by engineers who are trying to re-design or re-organize a system, a formal definition of the term is often lacking. This paper introduces and extends the application of the concept of entropy to the quantitative analysis of complexity in the design and study of manufacturing systems. Sequence disorder complexity and routing complexity are defined in the context of manufacturing systems, specifically including rework cells. Different kinds of rework cells are defined and their parameters presented. The difference between input and output sequences is studied with respect to rework cell system parameters, and a quantitative complexity metric is introduced to measure the difference between alternative rework cells. The emphasis of this paper is on providing a comprehensive comparison of the system performance of several rework cell structures in terms of complexity, cost and quality. The paper concludes that both the choice of rework cell structure and the selection of system parameters are important to the design of rework cells.
We present a methodology for calculating the information-theoretic complexity of a mass customization (MC) manufacturing system, under various inventory management strategies, and identify the parameters of the whole system that are likely to be the most significant in determining overall system complexity. The information-theoretic complexity calculation provides a generic method for objectively comparing different kinds of system. We present a model of the structure of a mass customizing system, composed of a push line and a pull line, decoupled by an inventory of semi-finished variants. We calculate the information-theoretic complexity of the entire system, and show that the inventory is the main source of complexity in a mass customizing system. We then focus on the impact of different strategies for managing the inventory as methods to reduce complexity. We point out that the factors that affect inventory complexity most significantly are the number of stock locations and the number of variants that may be stored there, although the number of stock locations is the more significant of the two. These results can guide the designers and managers of mass customization systems towards the system structures that are most likely to have less management overhead, because of the reduced amount of information required to know the state of the system.
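Why locations dominate variants can be seen from a back-of-envelope bound. This is an assumption-laden simplification, not the paper's full calculation: if each of L stock locations independently holds one of V variants or is empty, describing the inventory state needs L·log2(V+1) bits — linear in locations but only logarithmic in variants.

```python
from math import log2

def inventory_info_bits(locations, variants):
    """Upper bound (bits) on the information needed to describe inventory
    state, assuming each location independently holds one of `variants`
    variants or is empty. A simplification for illustration only."""
    return locations * log2(variants + 1)

base = inventory_info_bits(4, 7)            # 4 * log2(8) = 12.0 bits
print(base)
# One extra location costs 3 more bits; one extra variant barely registers.
print(inventory_info_bits(5, 7) - base)     # 3.0
print(inventory_info_bits(4, 8) - base)     # ~0.68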
The article focuses on market efficiency and the long-memory of supply and demand. The long-memory of supply and demand implies that there are waves of buyer-initiated transactions that are highly foreseeable with a simple linear algorithm. The authors stress that the total price impact can be summed up with bare propagators associated with each transaction.
In this comment we discuss the problem of reconciling the linear efficiency of price returns with the long-memory of supply and demand. We present new evidence that shows that efficiency is maintained by a liquidity imbalance that co-moves with the imbalance of buyer- vs. seller-initiated transactions. For example, during a period where there is an excess of buyer-initiated transactions, there is also more liquidity for buy orders than sell orders, so that buy orders generate smaller and less frequent price responses than sell orders. At the moment a buy order is placed, the transaction sign imbalance tends to dominate, generating a price impact. However, the liquidity imbalance rapidly increases with time, so that after a small number of time steps it cancels all the inefficiency caused by the transaction sign imbalance, bounding the price impact. While the view presented by Bouchaud et al. of a fixed and temporary bare price impact is self-consistent and formally correct, we argue that viewing this in terms of a variable but permanent price impact provides a simpler and more natural view. This is in the spirit of the original conjecture of Lillo and Farmer, but generalized to allow for finite time lags in the build-up of the liquidity imbalance after a transaction. We discuss the possible strategic motivations that give rise to the liquidity imbalance and offer an alternative hypothesis. We also present some results that call into question the statistical significance of large swings in expected price impact at long times.
Making links between micro and macro levels has been problematic in the social sciences, and the literature in strategic management and organization theory is no exception. The purpose of this chapter is to raise theoretical issues in developing micro-foundations for strategic management and organizational analysis. We discuss more general problems with collectivism in the social sciences by focusing on specific problems in extant organizational analysis. We introduce micro-foundations to the literature by explicating the underlying theoretical foundations of the origins of individual action and interaction. We highlight opportunities for future research, specifically emphasizing the need for a rational choice programme in management research.
This article examines five common misunderstandings about case-study research: (a) theoretical knowledge is more valuable than practical knowledge; (b) one cannot generalize from a single case, therefore, the single-case study cannot contribute to scientific development; (c) the case study is most useful for generating hypotheses, whereas other methods are more suitable for hypotheses testing and theory building; (d) the case study contains a bias toward verification; and (e) it is often difficult to summarize specific case studies. This article explains and corrects these misunderstandings one by one and concludes with the Kuhnian insight that a scientific discipline without a large number of thoroughly executed case studies is a discipline without systematic production of exemplars, and a discipline without exemplars is an ineffective one. Social science may be strengthened by the execution of a greater number of good case studies.
A major source of risk in project management is inaccurate forecasts of project costs, demand, and other impacts. The paper presents a promising new approach to mitigating such risk based on theories of decision-making under uncertainty, which won the 2002 Nobel Prize in economics. First, the paper documents inaccuracy and risk in project management. Second, it explains inaccuracy in terms of optimism bias and strategic misrepresentation. Third, the theoretical basis is presented for a promising new method called "reference class forecasting," which achieves accuracy by basing forecasts on actual performance in a reference class of comparable projects and thereby bypassing both optimism bias and strategic misrepresentation. Fourth, the paper presents the first instance of practical reference class forecasting, which concerns cost forecasts for large transportation infrastructure projects. Finally, potentials for and barriers to reference class forecasting are assessed.
If we want to empower and re-enchant organization research, we need to do three things. First, we must drop all pretence, however indirect, at emulating the success of the natural sciences in producing cumulative and predictive theory, for their approach simply does not work in organization research or any of the social sciences (for the full argument, see Flyvbjerg 2001). Second, we must address problems that matter to groups in the local, national, and global communities in which we live, and we must do it in ways that matter; we must focus on issues of context, values, and power, as advocated by great social scientists from Aristotle and Machiavelli to Max Weber and Pierre Bourdieu. Finally, we must effectively and dialogically communicate the results of our research to our fellow citizens and carefully listen to their feedback. If we do this – focus on specific values and interests in the context of particular power relations – we may successfully transform organization research into an activity performed in public for organizational publics, sometimes to clarify, sometimes to intervene, sometimes to generate new perspectives, and always to serve as eyes and ears in ongoing efforts to understand the present and to deliberate about the future. We may, in short, arrive at organization research that matters.
This article addresses three main issues. First, it argues that David Laitin, in a misguided critique of Bent Flyvbjerg’s book, "Making Social Science Matter," for being a surrogate manifesto for Perestroika, misrepresents the book in the extreme. Second, the article argues that Laitin’s claim that political science may become normal, predictive science in the natural science sense is unfounded; the claim is a dead end that perestroikans try to get beyond. Finally, the article proposes that political scientists substitute phronesis for episteme and, thereby, avoid the trap of emulating natural science. By doing so, political scientists may arrive at social science that is strong where natural science is weak: in the reflexive analysis and discussion of values and interests aimed at praxis, which is the prerequisite for an enlightened political, economic, and cultural development in any society.
This paper presents results from the first statistically significant study of traffic forecasts in transportation infrastructure projects. The sample used is the largest of its kind, covering 210 projects in 14 nations worth US$58 billion. The study shows with very high statistical significance that forecasters generally do a poor job of estimating the demand for transportation infrastructure projects. The result is substantial downside financial and economic risk. Forecasts have not become more accurate over the 30-year period studied. If techniques and skills for arriving at accurate demand forecasts have improved over time, as often claimed by forecasters, this does not show in the data. For nine out of ten rail projects, passenger forecasts are overestimated; average overestimation is 106%. For 72% of rail projects, forecasts are overestimated by more than two-thirds. For 50% of road projects, the difference between actual and forecasted traffic is more than ±20%; for 25% of road projects, the difference is larger than ±40%. Forecasts for roads are more accurate and more balanced than for rail, with no significant difference between the frequency of inflated versus deflated forecasts. But for both rail and road projects, the risk is substantial that demand forecasts are incorrect by a large margin. The causes of inaccuracy in forecasts are different for rail and road projects, with political causes playing a larger role for rail than for road. The cure is more accountability and reference class forecasting. Highly inaccurate traffic forecasts combined with large standard deviations translate into large financial and economic risks. But such risks are typically ignored or downplayed by planners and decision-makers, to the detriment of social and economic welfare. The paper presents the data and approach with which planners may begin valid and reliable risk assessment.
This article tells a cautionary tale of the pitfalls surrounding small business tax policy, illustrated with some examples from the United Kingdom. The heterogeneous nature of the small business sector makes it unlikely that tax reliefs and incentives based purely on size will be well targeted. Many small firms are not entrepreneurial and do not wish to grow but it is nonetheless predictable and rational for them to seek to take advantage of any tax incentives made available on the basis of size or legal organisational form. In addition, tax policy that concentrates on the provision of incentives for small firms is likely to result in complexity, the proliferation of thresholds and frequency of change, the costs of which may well outweigh any advantages to the smallest firms. The compliance costs resulting from taxation are regressive, but so too are the costs of dealing with special reliefs. A central problem relates to the drawing of a line between income from capital and income from labour for tax purposes. In seeking to do this, the tax system needs to have regard to legal as well as economic realities. Small business tax policy needs to be based on a better understanding of the way in which businesses form and develop and their motivations and difficulties. Stability and predictability may be more important than special reliefs.
In the European Union and in many federal and non-federal countries, the central government pays subsidies to poor regions. These subsidies are often seen as a redistributive measure which comes at the cost of an efficiency loss. This paper develops an economic rationale for regional policy based on economic efficiency. We consider a model of a federation consisting of a rich and a poor region. The economy is characterized by imperfect competition in goods markets and unemployment. Firms initially produce in the rich region but may relocate their production to the poor region. We show that a subsidy on investment in the poor region unambiguously increases welfare if labour markets are competitive. If there is unemployment in both regions, the case for regional subsidies is weaker.
This paper models, and experimentally simulates, the free-rider problem in a takeover when the raider has the option to “resolicit,” that is, to make a new offer after an offer has been rejected. In theory, the option to resolicit, by lowering offer credibility, increases the dissipative losses associated with free riding. In practice, the outcomes of our experiment, while quite closely tracking theory in the effective absence of an option to resolicit, differed dramatically from theory when a significant probability of resolicitation was introduced: The option to resolicit reduced the costs of free riding fairly substantially. Both the raider offers and the shareholder tendering responses generally exceeded equilibrium predictions.
Traditional modes of researching health service practice and policy have been criticised in terms of timeliness and relevance by potential users, and of poor uptake by researchers themselves. Newer models, based on principles of collaboration and integration, are being used in some healthcare settings, and there is evidence that these approaches help to bridge the gap that exists between ‘research’ and ‘practice’. This article describes the work of a small research team within the former NHS Modernisation Agency, appropriately named Research into Practice, and reflects on its experience of using collaborative and integrative approaches to its work.
Our purpose in this paper is to produce a tractable model which illuminates problems relating to individual bank behaviour and risk-taking, to possible contagious interrelationships between banks, and to the appropriate design of prudential requirements and incentives to limit excessive risk-taking. Our model is rich enough to include heterogeneous agents (commercial banks and investors), endogenous default, and multiple commodity, credit and deposit markets. Yet, it is simple enough to be effectively computable. Financial fragility emerges naturally as an equilibrium phenomenon. In our model a version of the liquidity trap can occur. Moreover, the Modigliani-Miller proposition fails either through frictions in the (nominal) financial system or through incentives for differential investment behaviour arising from the imposed capital requirements. In addition, a non-trivial quantity theory of money is derived, liquidity and default premia co-determine interest rates, and both regulatory and monetary policies have non-neutral effects. The model also indicates how monetary policy may affect financial fragility, thus highlighting the trade-off between financial stability and economic efficiency.
This paper extends the model proposed by Goodhart, Sunirand, and Tsomocos (2003, 2004a, b) to an infinite horizon setting. Thus, we are able to assess how the model conforms with the time series data of the U.K. banking system. We conclude that, since the model performs satisfactorily, it can be readily used to assess financial fragility given its flexibility, computability, and the presence of multiple contagion channels and heterogeneous banks and investors.
This article uses a mid-century text to reengage a late-1970s concept to answer a new century question. The authors return to Alvin Gouldner's classic (1954) study Patterns of Industrial Bureaucracy to reexamine the "coupling" concept in contemporary institutionalism in a way that engages the following question: How do new institutional forms emerge? Based on Gouldner's detailed observations of work in a gypsum mine, the authors argue that coupling processes are key mechanisms in the emergence of institutional forms. Examining coupling as a dynamic process and activity helps us to understand how the institution of bureaucracy emerged in the gypsum mine and interacted with previous social orders of authority and control. Gouldner's account of coupling at the mine is a story of formal and informal power struggles and active conflict over meaning, bringing the process of local institutional formation into sharp relief.
Organizational sociologists often treat institutions as macro cultural logics, representations, and schemata, with less consideration for how institutions are "inhabited" (Scully and Creed, 1997) by people doing things together. As such, this article uses a symbolic interactionist rereading of Gouldner's classic study Patterns of Industrial Bureaucracy as a lever to expand the boundaries of institutionalism to encompass a richer understanding of action, interaction, and meaning. Fifty years after its publication, Gouldner's study still speaks to us, though in ways we (and he) may not have anticipated five decades ago. The rich field observations in Patterns remind us that institutions such as bureaucracy are inhabited by people and their interactions, and the book provides an opportunity for intellectual renewal. Instead of treating contemporary institutionalism and symbolic interaction as antagonistic, we treat them as complementary components of an "inhabited institutions approach" that focuses on local and extra-local embeddedness, local and extra-local meaning, and a skeptical, inquiring attitude. This approach yields a doubly constructed view: On the one hand, institutions provide the raw materials and guidelines for social interactions ("construct interactions"), and on the other hand, the meanings of institutions are constructed and propelled forward by social interactions. Institutions are not inert categories of meaning; rather they are populated with people whose social interactions suffuse institutions with local force and significance.
Multi-sector collaboration is an important means of achieving strategic, cross-sectoral change. However, it also presents significant managerial challenges. In this paper, we examine these challenges in the case of a multi-sector collaboration formed to address treatment issues in the Canadian HIV/AIDS domain. Based on this case, we develop a framework for understanding multi-sector collaboration as a series of conversations in which participants must successfully juggle their dual roles of collaborative partner and organisational representatives.
This paper examines the processes of organizational adaptation and competitiveness of firms in an emerging economy. The study is set in the Argentinian context of the 1990s when a combination of economic and political change triggered a massive change in the competitive context of indigenous firms. Two highly flexible firms and two less-flexible firms are studied from the pharmaceutical and edible oil industries and longitudinal data are supplied to explore the determinants of organizational flexibility in those organizations.
This paper provides a new explanation for the use of convertible securities in venture capital. A key property of convertible preferred equity is that it allocates different cash flow rights, depending on whether exit occurs by acquisition or IPO. The paper builds a model with double moral hazard, where both the entrepreneur and the venture capitalist provide value-adding effort. The optimal contract gives the venture capitalist more cash flow rights in acquisitions than IPOs. This explains the use of convertible preferred equity, including automatic conversion at IPO. Contingent control rights are also important for achieving efficient exit decisions.
We draw on a series of in-depth interviews with senior managers from institutional investors and large listed corporations to explore how different conceptualizations of institutional investors, their role in the corporate governance process, and their interactions with corporate management, are reflected in the accounts of the actors concerned. We find that the conceptualizations in terms of ownership and agency that dominate both academic and popular discourses are marginal to the actors’ accounts. Rather, both fund managers and company managers conceptualize institutional investors primarily as financial traders who happen, as a result of their trading, to control key resources, but whose interests are effectively divorced from those of long-term share owners. Our analysis suggests that far from being mitigated by the large share blocks of institutional investors, as some commentators have suggested, the separation of ownership from control has been compounded in the UK by a separation of accountability from responsibility, with the interests of the institutions holding managers accountable being quite different from those of the owner-beneficiaries to whom they feel responsible. This raises significant challenges for corporate governance policy as well as new issues for research into the governance process.
Strategy workshops, the practice of taking time out from day-to-day routines to deliberate on the longer-term direction of the organisation, are a common practice, yet surprisingly little is known about them. This article presents the first substantial exploration of the role of workshops in strategy development through a large-scale UK survey of managerial experience of these events. The findings, based on 1,337 returns, show that strategy workshops play an important part in formal strategic planning processes; that they rely on discursive rather than analytical approaches to strategy formation; and that they typically do not include middle managers, rather reinforcing elitist approaches to strategy development. The authors conclude that strategy workshops are important vehicles for the emergence of strategy and discuss the implications of their findings for management practice and future research.
Dr Alison Holmes et al. introduce HOMIP (Hammersmith Organisational Model for Infection Prevention), the model that aims to embed infection prevention and control within the structure of an Acute Trust.
Mention the term value chain, and most managers will have visions of a neat sequence of value-enhancing activities. In the simplest form of a value chain, raw materials are formed into components that are assembled into final products, distributed, sold and serviced. Frequently, these activities span multiple organizations. This orderly progression allows managers to formulate profitable strategies and coordinate operations. But it can also put a stranglehold on innovation at a time when the greatest opportunities for value creation (and the most significant threats to long-term survival) often originate outside the traditional, linear view.
Traditional value chains may have worked well for landline telecommunications and automobile production during the last century, but innovation today comes in many shapes and sizes — and often unexpectedly. (See “About the Research.”) This argues for seeing value creation as multidirectional rather than linear. Given the constant tension between opportunity and threat, companies need to explore opportunities for managing risks, gaining additional influence over customer demand and generating new ways to create customer value. Mobile phone giant Nokia Corp., for example, is legendary for having had the foresight to lock in critical components that were in short supply, allowing it to achieve significant market share growth. However, Nokia suffered a setback a few years ago when competitors used that very same strategy to take advantage of shifts in the demand for LCD displays.
Protection against such fickle reversals calls for a more complex view of value — one that is based on a grid as opposed to the traditional chain. The grid approach allows companies to move beyond traditional linear thinking and industry lines and map out novel opportunities and threats. This permits managers to identify where other companies — perhaps even those engaged in entirely different value chains — obtain value, line up critical resources or influence customer demand.
We formally incorporate the option to gather information into a game and thus endogenize the information structure. We ask whether models with exogenous information structures are robust with respect to this endogenization. Any Nash equilibrium of the game with information acquisition induces a Nash equilibrium in the corresponding game with an exogenous structure. We provide sufficient conditions on the structure of the game for which this remains true when ‘Nash’ is replaced by ‘sequential’. We characterize the (sequential) Nash equilibria of games with exogenous information structures that can arise as a (sequential) Nash equilibrium of games with endogenous information acquisition.
Unlike in the U.S., the initial price range for European IPOs is seldom revised, although issues are often priced at the upper bound. We develop a model that explains this seemingly inefficient pricing behavior. As in Europe, but not in the U.S., underwriters in the model obtain information from investors before establishing the indicative price range. A commitment to stay within the range is necessary to extract private information from investors. Ours is therefore the first treatment in which the bookbuilding range has a clear economic role. The model has important implications for empirical research based on European primary market data.
I study whether the capital gains tax is an impediment to selling by some investors and if so, to what degree associated delayed selling affects stock prices. I find that selling decisions by institutions serving tax-sensitive clients are sensitive to cumulative capital gains, a pattern not observed for institutions with predominantly tax-exempt clients. Moreover, tax-related underselling impacts stock prices during large earnings surprises for stocks held primarily by tax-sensitive investors. The corresponding price reactions are less negative (more positive) with higher cumulative capital gains. This price pressure pattern is more severe when arbitrage is more costly.
Purpose - To find out if the beta from the capital asset pricing model (CAPM) accurately measures the systematic equity risk of a firm's pension funds.
Design/methodology/approach - Takes 4,453 observations of equity beta using the market model on weekly return data for up to one year from 1993 to 1998, adds firm risk (weighted average beta for equity and risk) and pension risk (pension asset risk minus pension liability risk) obtained from ERISA Form 5500. Tests results for robustness.
Findings - Finds that, in theory, following Merton (2002) the cost of capital is severely overvalued if pension assets and liabilities are ignored, or if the risk of the pension plan is ignored. However, in the market, shows that prices reflect the risk level of the pension funds.
Research limitations/implications - Points out that these findings predate the pension fund deficit crisis of the 2000s. Notes that the market does not differentiate between operational and pension fund risk, and that the distortions caused by this risk to the calculation of cost of capital need further research.
Practical implications - Indicates that project financing needs to be recalculated to reflect the pension fund risk; otherwise good projects may be rejected.
Originality/value - Presents the view that pension fund risk is detected by the markets, even if the data are not easily available, by means unknown.
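The market-model beta at the heart of the study is simply the OLS slope of stock returns regressed on market returns. A minimal Python sketch with made-up weekly returns (not the study's ERISA data):

```python
def market_model_beta(stock_returns, market_returns):
    """Equity beta: OLS slope of stock returns regressed on market returns."""
    n = len(stock_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns))
    var = sum((m - mean_m) ** 2 for m in market_returns)
    return cov / var

# A stock that amplifies every market move by 1.5x has beta 1.5.
market = [0.01, -0.02, 0.015, 0.005, -0.01]
stock = [1.5 * r for r in market]
print(market_model_beta(stock, market))   # 1.5 (to floating-point precision)
```

The study's point is that this equity beta silently bundles together operational risk and pension-plan risk; the regression itself cannot tell them apart.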
Morck, Yeung and Yu show that R2 is higher in countries with less developed financial systems and poorer corporate governance. We show how control rights and information affect the division of risk bearing between managers and investors. Lack of transparency increases R2 by shifting firm-specific risk to managers. Opaque stocks with high R2s are also more likely to crash, that is, to deliver large negative returns. Using stock returns from 40 stock markets from 1990 to 2001, we find strong positive relations between R2 and several measures of opaqueness. These measures also explain the frequency of crashes.
Venture philanthropy provides a blend of performance-based development finance and professional services to social purpose organisations in helping them expand their social impact. This is a high-engagement, partnership approach analogous to the practices of venture capital in building the commercial value of companies.
In its modern form, venture philanthropy developed significantly in the US in the mid 1990s, took hold in the UK from 2002 and is now expanding into continental Europe.
Although not without its sceptics, venture philanthropy has the potential to contribute to developing a more responsive and diverse capital market for the social sector. Its focus on building organisational capacity in entrepreneurial social purpose organisations, matching appropriate finance with strategic business-like advice, makes it a distinctive provider of capital.
Venture philanthropy in Europe has strong links to the private equity and venture capital community, giving it opportunities to influence the corporate social responsibility of a set of major players in Europe's financial services industry. Several new venture philanthropy funds have been established by philanthropists with successful careers in private equity.
Europe's transitional countries, in Central and Eastern Europe, the Baltic States and former Soviet Union, have under-capacitated social sectors and widespread, unmet social needs. Venture philanthropy may have a particularly valuable role to play in helping build stronger civil society institutions in these countries.
As a relatively young industry, venture philanthropy faces many challenges in communicating and marketing what it does; developing a menu of financial instruments and advisory services; measuring performance and social impact; collaborating with complementary capital providers such as foundations.
This working paper is the first in a series which explores the expansion of high engagement philanthropy in Europe.
This article re-connects with structure-agency debates to explore the development of the social work assistant role. Drawing upon an analytical framework based on the tenets of critical realism, it seeks to explain the evolution of this role across three local authorities by looking at the interaction of structure and agency at different societal levels: the sub-sector, the organization and the workplace. In doing so, it establishes the analytical value of the structure-agency dualism in studying occupations and, at the same time, provides data on what employees do in the type of role increasingly likely to characterize the modern service economy.
This article focuses on an ad hoc body, the UK Local Government Pay Commission, as a means of developing a broader argument on the relationship between collective dispute resolution and the nature of public service reform. It suggests that the character of disputes in this sector and the ‘fitness for purpose’ of those institutions designed to resolve them are critically related to an industrial relations agenda, which is in turn shaped by changing forms of public service provision. An extended period of public service ‘modernization’ in local government led to a restructuring of collective bargaining with substantive and procedural consequences: new issues emerged and related tensions arose that could not be managed by the traditional bargaining machinery or by the use of existing third-party conciliation or arbitration mechanisms. This required relatively novel arrangements in the form of the LGPC, whose nature, operation and lessons are discussed.
Purpose – The purpose of this article is to evaluate employee perceptions of pay practice in civil service executive agencies in the wake of changes in the established institutions of pay determination.
Design/methodology/approach – A survey design drawing original data from 1,057 civil servants, all members of the IPMS (now merged with the EMA to form Prospect), the union representing scientific, technical and professional occupations in the civil service.
Findings – The study distinguishes four distinctive pay practice systems. Pay satisfaction is found to be positively related to two principles: a clear effort-reward link and an understanding of pay criteria. However, employees are more satisfied with pay when their organisational pay system accords with traditional rather than newer practices. This suggests that embedded norms continue to exert a powerful influence over employee perceptions of pay.
Research limitations/implications – Whilst the respondent profile accurately reflects those working in the scientific, professional and technical grades (predominantly male, white, full-time workers), aspects of this profile do not accurately reflect the civil service as a whole.
Practical implications – Old habits “die hard”. A sobering message for those practitioners who readily assume that forced change in pay systems will elicit “desired” employee responses.
Originality/value – Against a backdrop of fundamental changes in the character of pay determination in the civil service, this study presents employee perceptions of pay practices, shows how they combine in ways that reflect a distinct set of pay systems and reveals the impact associated with these systems on attitudes and behaviours.
We consider the consequences of competition between two types of experimental exchange mechanisms, a “decentralized bargaining” market, and a “centralized” market. It is shown that decentralized bargaining is subject to a process of “unraveling” in which relatively high value traders (buyers with a high willingness to pay and sellers with low costs) continuously find trading in the centralized markets more attractive until few opportunities for mutually beneficial trade remain outside the centralized marketplace.
Chinese translation by Zhang Beisheng and Diao Xiaojing.
Providing a retrospective and prospective overview of organization studies, the Handbook continues to challenge and inspire readers with its synthesis of knowledge and literature. As ever, contributions have been selected to reflect the diversity of the field. New chapters cover areas such as organizational change; knowledge management; and organizational networks.
Part One reflects on the relationship between theory, research and practice in organization studies.
Part Two addresses a number of the most significant issues affecting organization studies, such as leadership, diversity and globalization.
Comprehensive and far-reaching, this important resource will set new standards for the understanding of organizational studies. It will be invaluable to researchers, teachers and advanced students alike.
We consider methods for quantifying the similarity of vertices in networks. We propose a measure of similarity based on the concept that two vertices are similar if their immediate neighbors in the network are themselves similar. This leads to a self-consistent matrix formulation of similarity that can be evaluated iteratively using only a knowledge of the adjacency matrix of the network. We test our similarity measure on computer-generated networks for which the expected results are known, and on a number of real-world networks.
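The self-consistent, neighbour-based similarity described above can be sketched as an iteration over the adjacency matrix. This is a minimal illustration rather than the paper's exact formulation: the damping factor `alpha` and the normalisation by the leading eigenvalue are assumptions chosen here to keep the iteration convergent.

```python
import numpy as np

def neighbor_similarity(A, alpha=0.9, tol=1e-8, max_iter=1000):
    """Self-consistent similarity: two vertices are similar if their
    neighbours are similar.  Iterates S <- alpha * A S A^T / lam^2 + I,
    where lam is the leading eigenvalue of A (the normalisation keeps
    the update contractive for alpha < 1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    lam = max(np.abs(np.linalg.eigvals(A)).max(), 1.0)
    S = np.eye(n)
    for _ in range(max_iter):
        S_new = alpha * (A @ S @ A.T) / lam**2 + np.eye(n)
        if np.abs(S_new - S).max() < tol:
            break
        S = S_new
    return S_new

# Path graph 0-1-2-3: vertices 0 and 2 share neighbour 1, so they come
# out more similar to each other than vertices 0 and 3, which share none.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
S = neighbor_similarity(A)
```

Only knowledge of the adjacency matrix is needed, matching the abstract's claim that the measure can be evaluated iteratively from A alone.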
In this paper, we study the research areas of Chinese natural science basic research from the viewpoint of complex networks. Two research areas are considered to be connected if they appear in the same fund proposal. The explicit network of such connections is constructed using data from 1999 to 2004. Analysis of the real data shows that the degree distribution of the research areas network (RAN) is better fitted by an exponential distribution. The network displays a small-world effect, in which randomly chosen pairs of research areas are typically separated by only a short path of intermediate research areas. The average distance of the RAN decreases with time, while the average clustering coefficient increases with time, which indicates that scientific research tends to become increasingly integrated across research areas. The relationship between the clustering coefficient C(k) and the degree k indicates that there is no hierarchical organization in the RAN.
In this paper, the relationship between the in-degree and out-degree of the World-Wide Web is studied. At each time step, a new node with out-degree k_out is added, where k_out obeys a power-law distribution with mean value m. The analytical and simulation results suggest that the exponent of the in-degree distribution is γ_in = 2 + 1/m, depending on the average out-degree. This finding is supported by the empirical data, and has not been emphasized by previous studies on directed networks.
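A rough simulation of such a growth process is sketched below. The attachment rule used here, linear preferential attachment on in-degree with unit "attractiveness" (a kernel for which an in-degree exponent of 2 + 1/m is the known analytical result), is our own assumption; the abstract does not spell out how link targets are chosen.

```python
import random

def directed_growth(n, kmax=20, gamma_out=2.0, seed=1):
    """Grow a directed network: each new node draws an out-degree k_out
    from a truncated power law p(k) ~ k^(-gamma_out), then attaches to
    existing nodes with probability proportional to (in-degree + 1).
    The pool list realises that kernel: node i appears indeg[i] + 1
    times.  Returns the list of in-degrees."""
    rng = random.Random(seed)
    ks = list(range(1, kmax + 1))
    weights = [k ** -gamma_out for k in ks]
    indeg = [0] * n
    pool = [0]
    for new in range(1, n):
        kout = rng.choices(ks, weights=weights)[0]
        targets = {rng.choice(pool) for _ in range(kout)}  # dedup: keep graph simple
        for t in targets:
            indeg[t] += 1
            pool.append(t)
        pool.append(new)
    return indeg

indeg = directed_growth(2000)
```

The heavy tail of the resulting in-degree sequence (early nodes accumulate many in-links) is the qualitative signature the abstract's analytical result quantifies.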
In this article, we propose a susceptible-infected (SI) model with identical infectivity, in which, at each time step, each node can only contact a constant number of neighbors. We implement this model on scale-free networks, and find that the infected population grows exponentially, with the time scale proportional to the spreading rate. Furthermore, by numerical simulation, we demonstrate that targeted immunization is much less efficient in the present model than in the standard susceptible-infected model. Finally, we investigate a fast-spreading strategy for the case where only local information is available. In contrast to the extensively studied path-finding strategy, the strategy preferring small-degree nodes is more efficient than the one preferring large-degree nodes. Our results indicate an essential relationship between network traffic and network epidemics on scale-free networks.
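The contact-limited SI dynamics can be illustrated with a short simulation. Both the Barabási–Albert construction and the parameter choices below (contact number `A`, spreading rate `lam`) are illustrative assumptions rather than the paper's exact setup.

```python
import random

def ba_network(n, m=3, seed=1):
    """Scale-free network via Barabasi-Albert preferential attachment,
    returned as an adjacency list {node: set of neighbours}."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = []           # every node appears once per link endpoint
    targets = list(range(m))
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        chosen = set()
        while len(chosen) < m:  # degree-proportional sampling
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj

def si_identical_infectivity(adj, lam=0.2, A=2, steps=80, seed=2):
    """SI model with identical infectivity: each infected node contacts
    exactly A randomly chosen neighbours per step, regardless of its
    degree, infecting each contacted susceptible with probability lam.
    Returns the infected count after each step."""
    rng = random.Random(seed)
    infected = {0}
    counts = [1]
    for _ in range(steps):
        new = set()
        for i in infected:
            nbrs = list(adj[i])
            for j in rng.sample(nbrs, min(A, len(nbrs))):
                if j not in infected and rng.random() < lam:
                    new.add(j)
        infected |= new
        counts.append(len(infected))
    return counts

counts = si_identical_infectivity(ba_network(200))
```

Because each infected node makes the same fixed number of contacts, hubs contribute no more infection attempts than low-degree nodes, which is why targeted (hub) immunization loses much of its usual advantage in this model.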
The comparative institutionalist approach to differences in national business systems necessarily highlights variations in the workings of contemporary capitalism. Less emphasis is given to similarities in historical events leading to the institutionalization of eventual forms of governance. These must include a common underlying ideational orientation of political elites to develop an industrial base from which to expand GDP. This urge to industrialize has led to a second common phenomenon — the importance of the family- or personally controlled business group (FBG). The alliance between political state and family business group is evidently a long-standing feature of national business systems. This paper suggests that such relational capitalism might also be seen as dangerously dependent on national communities that are homogenous in respect to ethnic identity. In South-East Asia, relations between the state and FBG can exacerbate inequities among multi-ethnic communities and provide institutionalized blockage to technological innovation. However, a different element of this process between ASEAN and earlier industrializers is the presence of foreign MNEs, which intrude upon state–FBG coupling and provide new options in the formation of local markets for labour and capital.
We study transport properties such as electrical and frictionless flow conductance on scale-free and Erdős–Rényi networks. We consider the conductance G between two arbitrarily chosen nodes, where each link has the same unit resistance. Our theoretical analysis for scale-free networks predicts a broad range of values of G, with a power-law tail distribution Φ_SF(G) ∼ G^(−g_G), where g_G = 2λ − 1 and λ is the decay exponent of the scale-free degree distribution. We confirm our predictions by simulations of scale-free networks, solving the Kirchhoff equations for the conductance between a pair of nodes. The power-law tail in Φ_SF(G) leads to large values of G, thereby significantly improving transport in scale-free networks compared to Erdős–Rényi networks, where the tail of the conductivity distribution decays exponentially. Based on a simple physical ‘transport backbone’ picture we suggest that the conductances of scale-free and Erdős–Rényi networks can be approximated by c k_A k_B/(k_A + k_B) for any pair of nodes A and B with degrees k_A and k_B. Thus, a single quantity c, which depends on the average degree ⟨k⟩ of the network, characterizes transport on both scale-free and Erdős–Rényi networks. We determine that c tends to 1 with increasing ⟨k⟩, and that it is larger for scale-free networks. We compare the electrical results with a model for frictionless transport, where conductance is defined as the number of link-independent paths between A and B, and find that a similar picture holds. The effects of distance on the value of conductance are considered for both models, and some differences emerge. Finally, we use a recent data set for the AS (autonomous system) level of the Internet and confirm that our results are valid in this real-world example.
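Solving the Kirchhoff equations for every node pair reduces to one pseudoinverse of the graph Laplacian. The sketch below checks the transport-backbone approximation c·k_A·k_B/(k_A + k_B) on an Erdős–Rényi sample; the graph size, edge probability and least-squares fit of c are our own illustrative choices, and the pseudoinverse formula assumes the sampled graph is connected (almost surely true at these parameters).

```python
import numpy as np

def pairwise_conductance(Adj):
    """Conductance between all node pairs with unit-resistance links.
    Uses the Laplacian pseudoinverse L+: the effective resistance is
    R_AB = L+_AA + L+_BB - 2 L+_AB, and the conductance G_AB = 1/R_AB."""
    A = np.asarray(Adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp
    np.fill_diagonal(R, np.inf)       # no self-conductance
    return 1.0 / R

# Erdos-Renyi sample; fit the single constant c in G_AB ~ c*kA*kB/(kA+kB).
rng = np.random.default_rng(0)
n, p = 60, 0.15
Adj = np.triu((rng.random((n, n)) < p).astype(float), 1)
Adj = Adj + Adj.T
G = pairwise_conductance(Adj)
k = Adj.sum(axis=1)
pred = np.outer(k, k) / (k[:, None] + k[None, :])
iu = np.triu_indices(n, 1)
c = (G[iu] * pred[iu]).sum() / (pred[iu] ** 2).sum()
```

The fitted c is the single quantity the abstract refers to: one number, set by the average degree, that summarises how well the degree-based backbone picture predicts pairwise conductance.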
Purpose – This paper examines the diversification of services and activities by freight forwarders in the UK. Following similar studies conducted in the USA, the paper analyses the trends towards service and revenue diversification observed in this sector.
Design/methodology/approach – The study is based on a survey of 100 UK-based freight forwarders and empirically tests the firms' respective revenue generation structures, as well as the range of services offered. The survey is complemented by semi-structured interviews at a further four companies in order to provide additional contextual explanations of the empirical findings.
Findings – The results show that diversification appears to be closely related to both company size and a diversified asset base. The motivation for diversification stems mainly from a perceived erosion of the traditional freight forwarding revenue streams, as companies are seeking higher profit margins outside their traditional core business, while addressing the increasingly comprehensive needs of their customers at the same time. The findings show that, although diversification is much less prominently seen in their revenue structures, companies are quite diverse in terms of the services offered. Service diversification was found to be a strategy predominantly followed by the larger companies with wider asset bases.
Practical implications – The freight forwarding industry is experiencing significant volatility as a result of technological advances, regulatory changes, customer pressures and increased competition. This study provides the empirical clarification needed for freight forwarding companies to derive a business strategy appropriate to their respective settings.
Originality/value – Previous studies have largely reported findings from research conducted in North America, which features a structurally very different population of freight forwarders and logistics operators. This study presents the status quo and trends of diversification in the UK, which features a population of considerably smaller firms and thus requires a different decision framework towards adopting a diversification strategy.
In The World's Newest Profession, Christopher McKenna offers a history of management consulting in the twentieth century. Although management consulting may not yet be a recognized profession, the leading consulting firms have been advising and reshaping the largest organizations in the world since the 1920s. This groundbreaking study details how the elite consulting firms, including McKinsey & Company and Booz Allen & Hamilton, expanded after US regulatory changes during the 1930s, how they changed giant corporations, nonprofits, and the state during the 1950s, and why consultants became so influential in the global economy after 1960. As they grew in number, consultants would introduce organizations to 'corporate culture' and 'decentralization' but they faced vilification for their role in the Enron crisis and for legitimating corporate blunders. Through detailed case studies based on unprecedented access to internal files and personal interviews, The World's Newest Profession explores how management consultants came to be so influential within our culture and explains exactly what consultants really do in the global economy....
Theorists often presume that popular writers are the translators and not the underlying source of new management ideas. John McDonald, in contrast, was a business journalist whose early interest in game theory was an important catalyst in the development of corporate strategy as an academic discipline. Unfortunately, few scholars remember McDonald's influential role as the ghost-writer for Alfred Sloan's memoir, My Years with General Motors, or McDonald's prescient decision to hire the young business historian, Alfred Chandler, to serve as their research assistant. Not only should management historians not judge a book by its cover, but neither should they presume that the author actually wrote the book.
Accounts of the spread of management ideas emphasizing the role of ‘supply-side’ actors underplay the active role recipients play in translating them into new and different forms. Comparing firms undergoing a similar process and looking at how a specific event unfolded, this paper aims to extend understanding of the concept of translation. It examines how ideas are rendered appropriate to a new setting through translation from the broad policy level into a set of specific practices. To do this, it looks at how a proposal to introduce lean management into the construction industry was applied within a set of firms and the projects they were undertaking. In the context of large ‘distance’ between the original arenas of the idea and its new one, the paper uncovers how the editing rules that are said to guide the process of translation are operationalized using a set of change interventions.
This paper advances a relational sociology of organization that seeks to address concerns over how organizational action is understood and situated. The approach outlined here is one which takes ontology seriously and requires transparency and consistency of position. It aims at causal explanation over description and/or prediction, and seeks to avoid pure voluntarism or structural determinism in such explanation. We advocate relational analysis that recognizes and engages with connections within and across organization and with wider contexts. We develop this argument by briefly reviewing three promising approaches: relational pragmatism, the social theorizing of Bourdieu, and critical realism, highlighting their ontological foundations and some similarities and differences, and surfacing some methodological issues. Our purpose is to encourage analysis that explores the connections within and between perspectives and theoretical positions. We conclude that the development of the field of organization theory will benefit from self-conscious and reflexive engagement and debate, both within and across our various research positions and traditions, only if such debates are conducted on the basis of holistic evaluations and interpretations that recognize (and value) difference.
Based on a review of available data from a database on large‐scale transport infrastructure projects, this paper investigates the hypothesis that traffic forecasts for road links in Europe are geographically biased with underestimated traffic volumes in metropolitan areas and overestimated traffic volumes in remote regions. The present data do not support this hypothesis. Since previous studies have shown a strong tendency to overestimated forecasts of the number of passengers on new rail projects, it could be speculated that road planners are more skilful and/or honest than rail planners. However, during the period when the investigated projects were planned (up to the late 1980s), there were hardly any strong incentives for road planners to make biased forecasts in order to place their projects in a more flattering light. Future research might uncover whether the change from the ‘predict and provide’ paradigm to ‘predict and prevent’ occurring in some European countries in the 1990s has influenced the accuracy of road traffic forecasts in metropolitan areas.
Actor-network theory (ANT) has contributed greatly to the development of science and technology studies. However, recent critiques appear to have left ANT in a gloomy theoretical black box. What is the likelihood of ANT exiting its current theoretical discontent? Is ANT worthy of salvation and on what grounds? Law argues that recent critiques stem from ANT’s development into a particular theoretical strategy. However, this article will argue that by focusing on strategy as messy and impure, ANT can be afforded the opportunity to shift from a fixed approach to an ambiguous and contingent strategy, well placed to carry on. The article achieves such an argument by first highlighting how ANT has contributed to a recent study of strategy in action; second, by outlining the strategic aspects of ANT; and third, by using the study of strategy in action as a means of engaging with ANT’s current theoretical discontent.
Mobility is a frequently recurring theme in recent debates around the emergence of new technologies. However, with this increasing attention paid to mobility, how does ‘immobility’ become notable as an absence of mobility? How are such perceptions of immobility used to occasion assessments of motive, intent and moral standing? This paper features a sociological interrogation of examples of immobility made notable through expectations of mobility. It utilises a study of CCTV as its principle example of the constitution and assessment of mobility and immobility. The paper explores theoretical strategies available for interrogating these issues. It concludes through an engagement with the boundaries constituted around mobility and immobility. The ways in which forms of assessment operate through, and further maintain, these boundaries are considered.
The number of closed-circuit television (CCTV) cameras in British town centres has rapidly increased in recent years. These increases are mirrored in Europe and North America. In Britain many of these cameras videotape town centres 24 hours a day, 365 days a year. How do CCTV systems account for the space of the town centres in which they operate? What theoretical sensibilities can we use to engage with CCTV spatial accounting? To what extent do terms such as ‘professional’, ‘legal’, ‘sociotechnical’ and ‘mundane’ enable adequate renditions of spatial-accounting activity? In this paper I will argue that engagement with the accomplishment of mundane public flows and specific incidents of accountable otherness can initiate a discussion of these questions and an alternative to panoptic renditions of CCTV. The discussion will seek to draw together a potentially tense and disruptive theoretical combination of ethnomethodology and science and technology studies.
Purpose – The paper aims to investigate the value of a network perspective in enhancing the understanding of the business-to-consumer marketing of high-involvement product categories. This is achieved through an analysis of the development of fair trade marketing in the UK.
Design/methodology/approach – The paper addresses the research question through an analysis of relevant literatures from both marketing and other disciplinary areas, and is thus multidisciplinary in nature. Findings from a series of in-depth, semi-structured interviews with senior representatives of a fair trade wholesaler, a specialist fair trade brand, supermarket retailers involved with fair trade and other fair trade labelling and support organisations are reported and discussed.
Findings – The relevance of an actor network theory (ANT) informed interpretation of the development of the fair trade marketing network is revealed. Its emphases on the processes of exchange and the role of human and non-human actants in enabling interactions within the network are shown to be important. Fair trade marketing is shown as occurring within an unfolding network of information exchanges. Analysis of this emerging network highlights a shift of emphasis in fair trade marketing from the fair trade process to fair trade products and, latterly, fair trade places.
Originality/value – The paper highlights the requirement for further conceptualisation of the business-to-consumer marketing of high-involvement product categories, and reveals the potential of ANT as one approach to meet this need. The paper also provides a detailed insight into the development of fair trade marketing in the UK.
Children are increasingly being recognised as a significant force in the retail market place, as primary consumers, influencers of others, and as future customers. This paper adds to the literature on children as consumers by exploring their attitudinal responses to a specific group of products: Fair Trade lines. There has been no research to date that has specifically addressed children as consumers of Fair Trade or the ethical purchase decision-making process in this area. The methodological approach taken here is an essentially interpretive and naturalistic analysis of two focus groups of school children. The analysis found that there is an urgent need to develop meaningful Fair Trade brands that combine strong brand knowledge and positive brand images to bridge the ethical purchase gap between the formation of clear ethical attitudes and actual ethical purchase behaviour. Such an approach would both capture more of the children’s primary market and influence future purchase behaviour. It is argued that Fair Trade actors should coordinate new marketing communications campaigns that build brand knowledge structures holistically around the Fair Trade process and that extend beyond merely raising consumer awareness.
In this paper, we examine acquisitions of two financially distressed retailers: Federated's takeover of Macy's, and Zell Chilmark's takeover of Carter Hawley Hale. In both cases the raider purchased some of the target's outstanding debt to launch its takeover attempt. These debt purchases appear to have been facilitated by two salient factors: the raider's expertise in dealing with distressed firm restructuring and the ability of the raider to acquire a large blockholding of debt. Our analysis indicates that, when these factors are present, it is optimal for a raider to initiate a takeover of a distressed firm through purchasing a block of the firm's debt. Target bondholder reaction will be favorable, whereas shareholder reaction may be either favorable or unfavorable.
This paper embeds security design in a model of evolutionary learning. We consider a competitive and perfect financial market where agents, as in Allen and Gale (1988), have heterogeneous valuations for cash flows. Our point of departure is that, instead of assuming that agents are endowed with rational expectations, we model their behavior as the product of adaptive learning. Our results demonstrate that adaptive learning profoundly affects security design. Securities are mispriced even in the long run, and optimal designs trade off underpricing against intrinsic value maximization. The evolutionarily dominant security design calls for issuing securities that engender large losses with a small but positive probability, and otherwise produce stable payoffs. These designs are almost the exact opposite of the pure state claims which are optimal in the rational expectations framework.
A decade after it was first published to international acclaim, the seminal Handbook of Organization Studies has been updated to capture exciting new developments in the field.
This essay suggests ‘Innovation Journalism’ as a useful theme through which to explore the interplay of journalism and innovation ecosystems. This involves investigating how journalism plays a part in connecting innovation with public interests, and how innovation processes and innovation ecosystems interact with public attention, with news media as an actor. It may also be of interest to study the ways in which journalists cover innovation processes and innovation ecosystems, the incentives that drive innovation journalism, and how news organizations may be organized to perform the task. We outline examples of research project topics to illustrate how this approach can inform studies of innovation and studies of journalism as practice, and to indicate possible scopes for the research theme. Going forward, we propose to identify earlier relevant scholarly research, and researchers, whose work can be attributed to this emerging research theme of Innovation Journalism.
In the growing and dynamic economy of 19th century America, businesses sold vast quantities of goods to one another, mostly on credit. This book explains how business people solved the problem of whom to trust - how they determined who was deserving of credit, and for how much.
This article presents a response to Jaco Lok and Hugh Willmott's comments on a 2004 paper by Professors N. Phillips, T.B. Lawrence, and C. Hardy on the discursive approach to institutional theory. The authors believe that the exploration of discourse analysis as a new avenue for future institutional research was a useful and feasible project. They also feel that it could help overcome some of the problems currently faced by institutional theory, particularly when explaining processes of institutionalization or the nature of institutions. Lok and Willmott's two key issues are identified and discussed: they question the authors' definition of discourse and claim that the authors have not gone far enough in their attempt to reconcile institutional theory with a better-developed concern for power and politics.
We integrate and extend research on causal ambiguity, indicating the principal causal paths from ambiguity to performance and discussing the connections between empirical findings and resource-based expectations. We then develop the linkage between causal ambiguity and management perception. Drawing on experimental and field research, we give testable propositions linking ambiguity, perception, and firm performance; integrate this research with studies of causal ambiguity; and suggest directions for future causal ambiguity research.
This investigation examines the response of customers to a salesperson whose feelings ostensibly differ from those that customers are personally experiencing. When customers in a bad mood encounter a salesperson who appears to be happy, they feel even worse than they otherwise would. These feelings, in turn, decrease their evaluations of the products the salesperson is promoting. Finally, unhappy customers tend to avoid a happy salesperson unless the decision is sufficiently important that they are motivated to ignore the effects of their bad mood.
The establishment of supplier parks (i.e., the co-location of component suppliers in the immediate vicinity of the vehicle assembly plant) is a relatively recent phenomenon in the automotive industry. The first supplier park officially labeled as such was opened in Abrera, Spain, in 1992 to supply the nearby Seat assembly plant. Since then, more than 40 parks have been established, predominantly in Europe and newly industrialised countries. Although significant differences between individual supplier parks can be observed, a consistent classification of co-located supplier clusters is still lacking. Given that the benefits of different supply chain configurations are contingent upon various internal and external factors, this lack of classification poses an important gap in the existing debate. This article pursues three objectives: first, to ground the discussion by providing a consistent classification of the various forms that co-located supplier operations can take; second, to investigate the dependencies between the form of supplier parks and product- and supply chain-related factors such as product architecture and demand uncertainty; and finally, to assess the suitability of the different forms of supplier parks for various supply chain configurations.
Purpose - To explore the effects of meetings between company executives and fund managers.
Design/methodology/approach - Recognizes the increasing importance of these face-to-face meetings, reviews relevant research and draws on information from interviews with finance directors and investor relations managers from 13 FTSE 100 companies (part of a larger study) and observations of meetings to assess their effects. Applies Foucault's ideas on the indivisibility of the power/knowledge nexus to look at information and control impacts of these meetings on investors and directors.
Findings - Meetings are an exercise of disciplinary power and acknowledge the rights of shareholders to monitor managers' performance and to hold them accountable. Executives see them as an opportunity to influence investors' perceptions, take great care to prepare for them and focus mainly on explaining company strategy. They may be forced to symbolize shareholder value, but this gives them added power to speak for the shareholder within the business and thus affect strategy. This is reinforced by the increased use of performance-related pay and options.
Research limitations/implications - More work is needed and promised on the influence of fund managers.
Originality/value - Reflects on the personal and corporate impact of company/fund manager meetings.
In an economy with a fixed exchange rate regime that suffers a random adverse shock, we study the strategies of imperfectly and sequentially informed speculators, which may trigger an endogenous devaluation before the devaluation occurs exogenously. The game played by the speculators has a unique symmetric Nash equilibrium, which is a strongly rational expectations equilibrium in the set of all strategies with delay. Uncertainty about the extent to which the Central Bank is ready to defend the peg extends the ex ante mean delay between the exogenous shock and the devaluation. We determine the rate of devaluation endogenously.
Money, which provides liquidity, is distinct from debt. The introduction of a bank that issues money in exchange for debt and pays out its profit as dividend to shareholders modifies the model of overlapping generations. The set of equilibrium paths, their dynamic properties, as well as the scope and effectiveness of monetary policy are significantly altered: though low rates of interest are associated with superior steady state allocations, stability of the steady state may require a nominal rate of interest above a certain minimum: without production, a decrease in the nominal rate of interest may result in explosive behavior or convergence to an endogenous cycle, while in an economy with production, an increase in the nominal rate of interest may lead to indeterminacy and fluctuations.
The article articulates a new view of markets by reflecting a matriarchal philosophy and by drawing from the work of new age philosophers and feminist economists. The paper also employs a holistic and activist approach to research. The neo-pagan movement has shifted toward agitating for large-scale social change in recent years. Articulating the difference between the emergent earth-based philosophy and the world’s leading religions is an important first step. The neo-pagan movement is not anti-science, but it takes its inspiration from those forms of science that are grounded in observation and respect for connectivity, and departs from mechanistic, artificially separated units of analysis and conceptualizations of social life.
TV newscasts are essentially cultural phenomena. Previous research suggests that the often-overlooked formal and implicit characteristics of newscasts may be systematically related to culture-specific characteristics. Investigating these characteristics by means of a frame-by-frame content analysis is identified as a particularly promising methodological approach. To examine the relationship between culture and selected formal characteristics of newscasts, we present an explorative study that compares material from the USA, the Arab world, and Germany. Results indicate that there are many significant differences, some of which are in line with expectations derived from cultural specifics. Specifically, we argue that the number of persons presented, as well as the context in which they are presented, can be interpreted as indicators of Individualism/Collectivism. The conclusions underline the validity of the chosen methodological approach, but also demonstrate the need for more comprehensive and theory-driven category schemes.
We construct a simple model where ex-ante homogeneous firms offer jobs with different wages and effort levels to ex-ante homogeneous workers. A minimum wage is shown to be welfare reducing because of its effect on the distribution of wages and effort.
Gardner's 1985 review has been a tremendous help to my research. For many years, I wouldn't leave home without it. I have no doubt that the sequel will also be of lasting use to me, as well as to numerous others, especially those starting out on research in this area. The updated review provides comprehensive coverage of a large and rapidly growing body of literature. A fair number of these papers are rather mathematical, and Gardner has done a sterling job of summarising their essential message and contribution. Comment and opinion are present throughout, which is surely beneficial, as it provides us with the perspective of someone who seems to have read pretty much every paper that has ever been written on the subject. I could merrily comment on numerous sections of the paper that I find interesting, but, instead, I have opted to say a few words about the two topics highlighted at the very end of the paper, method selection and empirical validation, and then to describe a couple of finance applications that were touched on only briefly: volatility and quantile forecasting.
The electricity transmission system operators in Great Britain are responsible for balancing generation and consumption. Although this can be done in the hour between market closure and real time, off-loading or calling up electricity at this late stage can be costly. Costs can be substantially reduced if the imbalance can be anticipated ahead of time and balanced by trading on the market. Efficient trading relies on accurate density forecasts for the Net Imbalance Volume, which is defined as the sum of all actions taken to balance the system. Forecasting this density is the focus of this paper. We break down the problem into point and volatility prediction. We evaluate density forecasts in terms of the economic benefit generated from trading advice resulting from the forecasts. Promising results were achieved using a seasonal ARMA model or a periodic AR model for point forecasting, combined with a simple approach to volatility forecasting.
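The decomposition into point and volatility prediction can be sketched in a few lines. This is a minimal illustration assuming an AR(1) point model and a rolling residual standard deviation for volatility; it is not the seasonal ARMA or periodic AR specifications evaluated in the paper.

```python
# Minimal sketch: split a one-step-ahead density forecast into a point
# forecast (AR(1), fit by least squares) and a volatility forecast
# (rolling standard deviation of in-sample residuals). Illustrative only.

def ar1_fit(series):
    """Least-squares estimate of y_t = a + b * y_{t-1} + e_t."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def density_forecast(series, window=24):
    """One-step-ahead mean and std defining a Gaussian density forecast."""
    a, b = ar1_fit(series)
    point = a + b * series[-1]                       # point forecast
    resid = [yi - (a + b * xi) for xi, yi in zip(series[:-1], series[1:])]
    recent = resid[-window:]
    m = sum(recent) / len(recent)
    vol = (sum((r - m) ** 2 for r in recent) / len(recent)) ** 0.5
    return point, vol
```

Fed a history of Net Imbalance Volume observations, the (mean, std) pair defines the forecast density on which a trading rule could act.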
Weather derivatives enable energy companies to protect themselves against weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. They can be used to forecast the density of the payoff from a weather derivative. The mean of the density is the fair price of the derivative, and the distribution about the mean is important for risk management tools, such as value-at-risk models. In this empirical paper, we use 1- to 10-day-ahead temperature ensemble predictions to forecast the mean and quantiles of the density of the payoff from a 10-day heating degree day put option. The ensemble-based forecasts compare favourably with those based on a univariate time series GARCH model. Promising quantile forecasts are also produced using quantile autoregression to model the forecast error of an ensemble-based forecast for the expected payoff.
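The step from ensemble scenarios to a payoff density can be illustrated as follows. The 18 degree base temperature, strike and tick size are hypothetical contract terms, not those used in the paper; each ensemble member is a 10-day temperature path.

```python
# Sketch: payoff density of a heating-degree-day (HDD) put from weather
# ensemble scenarios. Contract terms (base temperature, strike, tick)
# are illustrative assumptions.

def hdd(temps, base=18.0):
    """Accumulated heating degree days over a temperature path."""
    return sum(max(0.0, base - t) for t in temps)

def put_payoff(temps, strike, tick):
    """An HDD put pays when accumulated HDD falls below the strike."""
    return tick * max(0.0, strike - hdd(temps))

def payoff_density(ensemble, strike=100.0, tick=1000.0):
    """Mean (fair price) and selected quantiles of the payoff density."""
    payoffs = sorted(put_payoff(path, strike, tick) for path in ensemble)
    mean = sum(payoffs) / len(payoffs)

    def q(p):  # empirical quantile from the sorted payoffs
        return payoffs[min(len(payoffs) - 1, int(p * len(payoffs)))]

    return mean, {p: q(p) for p in (0.05, 0.5, 0.95)}
```

The mean is the fair price; a lower quantile such as the 5% level is the kind of input a value-at-risk calculation would use.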
This empirical paper compares the accuracy of six univariate methods for short-term electricity demand forecasting for lead times up to a day ahead. The very short lead times are of particular interest as univariate methods are often replaced by multivariate methods for prediction beyond about six hours ahead. The methods considered include the recently proposed exponential smoothing method for double seasonality and a new method based on principal component analysis (PCA). The methods are compared using a time series of hourly demand for Rio de Janeiro and a series of half-hourly demand for England and Wales. The PCA method performed well, but, overall, the best results were achieved with the exponential smoothing method, leading us to conclude that simpler and more robust methods, which require little domain knowledge, can outperform more complex alternatives.
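A minimal sketch of additive exponential smoothing with two seasonal cycles, in the spirit of the double-seasonality method mentioned above, is given below. The formulation is deliberately simplified (no trend term, illustrative smoothing constants, crude initialisation), so it is not the full method compared in the paper.

```python
# Sketch: additive exponential smoothing with an intraday cycle of
# length s1 and an intraweek cycle of length s2 (s2 a multiple of s1).
# Simplified: no trend term; smoothing constants are illustrative.

def double_seasonal_smooth(y, s1, s2, alpha=0.1, gamma=0.2, omega=0.2):
    """Return the one-step-ahead forecast after smoothing the series y."""
    level = sum(y[:s2]) / s2                     # crude level initialisation
    seas1 = [y[i] - level for i in range(s1)]    # intraday seasonal indices
    seas2 = [0.0] * s2                           # intraweek seasonal indices
    for t in range(s2, len(y)):
        i1, i2 = t % s1, t % s2
        level = alpha * (y[t] - seas1[i1] - seas2[i2]) + (1 - alpha) * level
        seas1[i1] = gamma * (y[t] - level - seas2[i2]) + (1 - gamma) * seas1[i1]
        seas2[i2] = omega * (y[t] - level - seas1[i1]) + (1 - omega) * seas2[i2]
    t = len(y)
    return level + seas1[t % s1] + seas2[t % s2]
```

On a series that is exactly periodic in the shorter cycle, the forecast reproduces the next point of the pattern; the second seasonal component matters when the weekly profile departs from a simple repetition of the daily one.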
The knowledge-based view of the firm implies that the innovative performance of R&D based organisations is strongly influenced by the quality of their relational capital. However, the quality of the employment relationship has been underplayed in this perspective. A model is developed that tests the quality of three dimensions of the employment relationship – the psychological contract, affective commitment and knowledge-sharing behaviours – and their consequences for innovative performance amongst 429 R&D employees in six different science and technology based firms. Analysis found that affective commitment plays an important role in mediating the effect of psychological contract fulfilment on knowledge-sharing behaviour, which in turn is strongly related to innovative performance. More specifically, fulfilment of the job design dimension of the psychological contract has an independent positive association with innovative performance, whereas fulfilment of the performance pay dimension is negatively associated.
In a related piece, we argue that the savings bond program can be revitalized to help families, especially low- to moderate-income households, save. Savings bonds are attractive savings vehicles with widespread consumer awareness. One specific suggestion is to allow recipients of federal tax refunds to easily direct some of their refunds to the purchase of savings bonds; in effect, to ask the Treasury to “just keep some of my money.” In March and April 2006, in conjunction with H&R Block, we conducted a “dry run” of this type of refund-based savings program. First, while the results are very preliminary, they suggest that untapped demand for bonds may be strong: given that capturing 1% of tax refunds would increase gross bond sales by over 25%, even a small share of refunds can materially boost bond sales and, more importantly, support family savings. Second, unless the IRS and/or the BPD change practices that currently create impediments for bond buyers, a refund-based bond program will likely fail; however, a few changes in government practices could simplify this process considerably. While policymakers must be mindful of the costs of administering any program, a refund-based savings bond program might enjoy certain attractive marketing and administrative economies.
CircleLending, an innovative start-up, offered individuals the ability to set up and manage informal loans made between relatives and friends. The company must decide which market segment to focus on and then how much money to raise from investors. CircleLending is a pioneer in the informal lending market, a largely unstudied and little understood consumer finance segment. Asheesh Advani, the founder and CEO of CircleLending, must evaluate the relative attractiveness of various segments, including housing, small business, and other lending.
In this paper, we examine a little known aspect of mutual fund accounting, whereby funds do not use contemporaneous fund holdings to calculate net asset values. This practice, sanctioned under SEC Rule 2a-4, uses stale portfolio holdings and gives rise to deviations between reported net asset values (NAVs) and returns and the economic values of those quantities. Using both simulations and a new sample of fund transaction data, we establish that distortions in both NAVs and returns are fairly common, and we discuss the implications of this observation for fund practice and regulation.
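The mechanics of the stale-holdings deviation can be shown with a toy example; the fund, prices and share counts below are hypothetical, not data from the paper.

```python
# Toy illustration of stale-holdings pricing: the reported NAV values
# today's prices against yesterday's (stale) holdings, so an intraday
# trade opens a gap versus the economic NAV. Numbers are hypothetical.

def nav(holdings, prices, shares_outstanding):
    """Net asset value per share = portfolio value / shares outstanding."""
    value = sum(qty * prices[asset] for asset, qty in holdings.items())
    return value / shares_outstanding

# Yesterday the fund held 100 shares of stock A; today it sold 50 of
# them at 12 (above today's closing price of 10), leaving 600 in cash.
stale_holdings = {"A": 100.0}
current_holdings = {"A": 50.0, "cash": 600.0}
closing_prices = {"A": 10.0, "cash": 1.0}

reported = nav(stale_holdings, closing_prices, 100.0)   # stale: 10.0
economic = nav(current_holdings, closing_prices, 100.0)  # true:  11.0
```

The profitable intraday sale is invisible to the stale calculation, so the reported NAV understates the economic NAV until the holdings are refreshed.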
Successful solutions to pressing social ills tend to consist of innovative combinations of a limited set of alternative ways of perceiving and resolving the issues. These contending policy perspectives justify, represent and stem from four different ways of organizing social relations: hierarchy, individualism, egalitarianism and fatalism. Each of these perspectives: (1) distils certain elements of experience and wisdom that are missed by the others; (2) provides a clear expression of the way in which a significant portion of the populace feels we should live with one another and with nature; and (3) needs all of the others in order to be sustainable. ‘Clumsy solutions’– policies that creatively combine all opposing perspectives on what the problems are and how they should be resolved – are therefore called for. We illustrate these claims for the issue of global warming.
We consider a dynamic model where traders in each period are matched randomly into pairs who then bargain about the division of a fixed surplus. When agreement is reached the traders leave the market. Traders who do not come to an agreement return in the next period, when they will be matched again, as long as their deadline has not yet expired. New traders enter exogenously in each period. We assume that traders within a pair know each other's deadline. We define and characterize the stationary equilibrium configuration. Traders with longer deadlines fare better than traders with short deadlines. It is shown that the heterogeneity of deadlines may cause delay. It is then shown that a centralized mechanism that controls the matching protocol, but does not interfere with the bargaining, eliminates all delay. Even though this efficient centralized mechanism is not as good for traders with long deadlines, it is shown that in a model where all traders can choose which mechanism to use, no delay will be observed.
Purpose – To expand beyond existing research on the integration of supply chain and new product development, which has focused narrowly on the need to pre-inform the supply chain before product launch, the need for new product development to consider the impact of product design on supply chain operations, and the goal of ensuring product availability at product launch.
Design/methodology/approach – This research note suggests ways forward and areas in which practice and research can progress.
Findings – The existing, limited focus on involving supply chain in new product development overlooks several central issues and opportunities that companies are beginning to explore and that can be supported by research. In particular, there is an opportunity to leverage supply chain in new product development for greater market impact and revenue growth.
Practical implications – Moving beyond limited approaches requires greater alignment between new product development and supply chain; it requires a focus that goes beyond just ensuring product availability; and it requires alignment much further upstream in the new product development process. Examples of early progress in companies are provided.
Originality/value – In addition to summarizing existing research, new avenues for research and practice are offered that can greatly improve alignment and the contribution of supply chain to new product development, for the good of the company as a whole. Specific research areas are suggested to enable research to support the realization of the path forward in this area.
This paper identifies a practice turn in current strategy research, treating strategy as something people do. However, it argues that this turn is incomplete in that researchers currently concentrate either on strategy activity at the intra-organizational level or on the aggregate effects of this activity at the extra-organizational level. The paper proposes a framework for strategy research that integrates these two levels based on the three concepts of strategy praxis, strategy practices and strategy practitioners. The paper develops implications of this framework for research, particularly with regard to the impact of strategy practices on strategy praxis, the creation and transfer of strategy practices and the making of strategy practitioners. The paper concludes by outlining the distinctive emphases of the practice perspective within the strategy discipline.
This article examines three practices of strategising/organising – strategy workshops, the project management of strategic and organisational initiatives, and the creation of symbolic artefacts to communicate strategic change. These are seen through a practice theory lens that emphasises practical activity and the tight linkage between strategising and organising. The article argues that, in a world of accelerating change, approaching strategy and organisation as interlinked and practical activities is more effective than traditional static and detached approaches that privilege analysis. As change drives repeated strategising/organising, it is mastery of the tools and procedures that matters, at least as much as the perfection of any transitory design. Drawing on a qualitative study of ten strategic reorganisations, the article analyses particular vignettes of strategy workshops, strategy projects and strategy artefacts in action. A common theme across all three practices is the importance of hands-on, practical crafting skills in getting strategising done. The article argues for a greater recognition of these kinds of craft skills in strategy, alongside traditional analytical skills, and addresses implications for practitioners and business schools. For practitioners, there is no need to reject formal strategy making, as some critics have proposed. Rather, practitioners can renew formal strategy by injecting craft directly into the process. Business schools, as managerial trainers for the strategy process, should extend both their research and their teaching. Strategy research needs to move beyond its traditional domain of economic analysis in order to understand the whole range of effective practices in strategising/organising work, drawing on close observation of what strategists actually do. Strategy teaching needs to bring the practicalities of strategising/organising work directly into the mainstream strategy curriculum, instead of marginalizing them into adjacent sub-disciplines such as consulting skills.
Most companies aim to identify different groups of attractive customers in order to offer them appropriate products and/or services. To do this, companies need market segmentation. There is, however, a problem with the standard methods employed in market segmentation. The static, inductive approach to market segmentation commonly employed by companies does little to identify buying intentions across demographic variables. A more dynamic and deductive approach to developing market segmentation is through analysis of consumer preference structures — an approach which is glaringly absent from most marketing texts, not least because of the difficulty of developing practical approaches which can generate effective marketing strategies. The purpose of this paper is to highlight and demonstrate a preference-based approach to market segmentation, using shoppers’ experience of online grocery retail brands in the UK. The paper first demonstrates how using choice-based conjoint analysis could help achieve this objective more effectively than other, more traditional conjoint methods. While conjoint analysis is not new, the ability to segment based on markedly different preference structures is a recent development and comprises a powerful, but underutilised, segmentation approach. A web-based methodology is then applied to build up a picture of consumers’ conscious and unconscious prioritisation of a large number of choice criteria. The paper calculates consumer utility values for both offline and online shoppers in the UK and develops a preference-based segmentation approach, which is compared with a traditional demographic segmentation approach using the same data. From this analysis, the advantages of a preference-based approach to segmentation are extracted. The paper closes with recommendations for market research practitioners.
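The core of a preference-based segmentation can be sketched as follows, assuming per-respondent part-worth utilities have already been estimated (e.g. from choice-based conjoint analysis). The attribute names and the importance-based assignment rule are illustrative, not the paper's procedure.

```python
# Sketch: segment respondents by preference structure rather than
# demographics. Part-worth utilities per respondent are assumed to be
# pre-estimated; attribute names are hypothetical.

def attribute_importance(partworths):
    """Importance of each attribute = range of its part-worth utilities."""
    return {attr: max(levels.values()) - min(levels.values())
            for attr, levels in partworths.items()}

def preference_segment(partworths):
    """Assign a respondent to the attribute they value most."""
    imp = attribute_importance(partworths)
    return max(imp, key=imp.get)
```

For a respondent with part-worths {"price": {"low": 2.0, "high": -2.0}, "delivery": {"next_day": 0.5, "3_day": -0.5}}, the price range (4.0) dominates, so the respondent falls in a price-driven segment; clustering respondents on these importance vectors yields preference-based segments that can then be profiled against demographics.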
The purpose of this paper is to analyse the online preference structures of consumers. Novel choice-based conjoint experiments, administered online, are used. A select group of high-net-worth online grocery shoppers is examined. Both qualitative and quantitative procedures are used to determine the most frequently cited attributes affecting online patronage. Whilst there is no single attribute on which a retailer could develop a competitive edge, a significant market advantage can be gained by being simultaneously "best in class" on the top four attributes. This research approach has significant practical application to a wide range of strategic marketing questions. These findings give focus to the management task facing marketing executives in the UK multichannel grocery market. How these findings might be used within a marketing plan is illustrated.