We respond to Jack Vromen's (this issue) critique of our discussion of the missing micro-foundations of work on routines and capabilities in economics and management research. Contrary to Vromen, we argue that (1) inter-level relations can be causal, and inter-level causal relations may also obtain between routines, on the one hand, and actions and interactions, on the other; (2) there are no macro-level causal mechanisms; and (3) on certain readings of the notion of routines and capabilities, these may be macro causes.
We analyze financial support for the entrepreneurial sector. State support can raise welfare by relaxing financial constraints, but it can also reduce lending standards if entrepreneurs substitute public sources of collateral for their own assets, if it encourages excessive entrepreneurial entry, or if it undermines bank monitoring incentives. We derive a "pecking order" for support schemes: support funds should be channeled first to credit guarantee schemes and then, when entrepreneurs start to substitute public for private collateral, to co-funding entrepreneurial projects. The optimal level of credit guarantee is diminishing in the costs of incentivising bank monitoring. We show in an extension that the long-term effect of public subsidies may be to impair the private sector's initiative to uncover cost savings.
Because managers hold a status in society similar to that of doctors and lawyers, it is natural to think of business as a profession--and of business schools as professional schools. But, argues Barker, a professor at Cambridge University's Judge Business School, that can lead to inappropriate analysis and misguided perceptions. We turn to professionals for advice, he writes, because they have knowledge that we don't. We trust their advice because they've been guaranteed by professional associations that establish the boundaries of the field and reach consensus on what body of learning is required for formal training and certification. These associations make a market for professional services feasible. Although business schools might be able to reach consensus on what they should teach, the proper question is whether what they teach qualifies students to manage. After all, successful businesses are commonly run by people without MBAs. Managers' roles are inherently general, variable, and indefinable; their core skill is to integrate across functional areas, groups of people, and circumstances. Integration is learned in the minds of MBA students, whose experiences and careers are widely diverse, rather than taught in the content of program modules. Thus business education must be highly collaborative, with grading downplayed, and learning must differ according to the stage of a student's career. Business schools are not professional schools. They are incubators for business leadership.
This paper makes two contributions. First, it demonstrates that income and expenses are incorrectly defined in the IASB's conceptual framework, and it proposes alternative definitions. Second, the paper identifies that, in part as a consequence of these incorrect definitions, and in part because there are two conflicting concepts of profit in IFRS, there is, first, no definition of profit in the Framework and, second, inconsistency and needless complexity in the concept of profit in IAS 1. The issues raised in this paper contribute to the current IASB projects on the conceptual framework and on financial statement presentation.
This paper addresses an important issue of presentation in the financial statements, namely the distinction between, on the one hand, the obligations and associated flows arising from the provision of finance to an entity ('financing') and, on the other hand, all other activities of the entity ('operating'). This operating-financing distinction has been well-established in the finance literature since the work of Miller and Modigliani (1958, 1961) and is ubiquitous and of considerable importance in practice in financial markets (e.g. Koller et al., 2005; CFA Institute, 2005; Penman, 2006). Yet accounting standards are underdeveloped in this area, and there are gaps and inconsistencies in both IFRS and US GAAP. Drawing upon the distinction between nature and function in the presentation of financial statement information, the paper contributes, first, by enhancing our theoretical understanding of the operating-financing distinction, which is currently defined in different and unreconciled ways in the literature, and, second, by proposing a practical basis for accounting standard-setters to determine requirements for the reporting of financing activity in the financial statements.
The European Union (EU) provides coordination and financing of trans-European transport infrastructures, i.e. roads and railways, which link the EU member states and reduce the cost of transport and mobility. This raises the question of whether EU involvement in this area is justified by inefficiencies of national infrastructure policies. Moreover, an often expressed concern is that policies enhancing mobility may boost tax competition. We analyze these questions using a model where countries compete for the location of profitable firms. We show that a coordination of investment in transport cost reducing infrastructures within union countries enhances welfare and mitigates tax competition. In contrast, with regard to union-periphery infrastructure, the union has an interest in a coordinated reduction of investment expenditures. Here, the effects on tax competition are ambiguous. Our results provide a rationale for EU-level regional policy that supports the development of intra-union infrastructure.
In this comment, we argue that — even in the model used by Hines — full deductibility of costs under the exemption system is incompatible with nationally optimal tax policy. We derive an optimality rule which suggests that cost apportionment rules are efficient. Our findings also imply that the exemption method should be accompanied by zero deductibility of costs if these costs are exclusively related to foreign source income.
The ongoing internationalization of business activity fuels concerns that governments may lose their ability to tax business income. By using data on sixteen German states from 1970 to 2005, we estimate the impact of internationalization, measured by trade volumes and stocks of foreign direct investment, on business tax revenues. We control for the impact of internationalization on business profits. Surprisingly, we find strong and robust evidence for a positive impact of internationalization on tax revenue. An increase in the internationalization indicator of ten percent increases tax revenue by over three percent. This counterintuitive result may be explained by higher tax avoidance activity of purely national firms or by legal provisions in the tax law which can be used as tax loopholes in the case of domestic transactions as opposed to cross-border transactions.
In this paper, we consider optimal tax enforcement policy in the presence of profit shifting toward tax havens. We show that, under separate accounting, tax enforcement levels may be too high due to negative fiscal externalities. In contrast, under formula apportionment, tax enforcement is likely to be too low due to positive externalities of tax enforcement. Our results challenge recent contributions arguing that, under formula apportionment, there is a tendency toward efficiently high levels of (effective) tax rates.
A large part of border crossing investment takes the form of international mergers and acquisitions. In this article, we ask what optimal repatriation tax systems look like in a world where investment involves a change of ownership, instead of a reallocation of real capital. We find that the standard results of international taxation do not carry over to the case of international mergers and acquisitions. The deduction system is no longer optimal from a national perspective and the foreign tax credit system fails to ensure global optimality. The tax exemption system is optimal if ownership advantage is a public good within the multinational firm. However, the cross-border cash-flow tax system dominates the exemption system in terms of optimality properties.
Would the introduction of a corporate tax system with consolidated tax base and formula apportionment lead to socially wasteful mergers and acquisitions across borders? This paper analyzes a two-country model with an international investor considering acquisitions of already existing target firms in a high-tax country and a low-tax country. The investor is able to shift profits from one location to another for tax saving purposes. Two systems of corporate taxation are compared, a system with separate accounting and a system with tax base consolidation and formula apportionment. It is shown that, under separate accounting, the number of acquisitions is inefficiently high in both the high-tax and the low-tax country. Under formula apportionment, the number of acquisitions is inefficiently high in the low-tax country and inefficiently low in the high-tax country. Under tax competition, a novel externality arises that worsens the efficiency properties of equilibrium tax rates under separate accounting, but may play an efficiency enhancing role under formula apportionment.
This paper explores the economic consequences of proposed EU reforms for a common consolidated corporate tax base. The reforms replace separate accounting with formula apportionment as a way to allocate corporate tax bases across countries. To assess the economic implications, we use a numerical computable general equilibrium (CGE) model for Europe. It encompasses several decision margins of firms such as marginal investment, FDI decisions, and multinational profit shifting. The simulations suggest that consolidation does not yield substantial welfare gains for Europe. The variation of effects across countries is large and depends on the choice of the apportionment formula. Consolidation with formula apportionment does not weaken incentives for tax competition. Tax competition instead offers a rationale for rate harmonization, in addition to base harmonization.
This article assesses the economic implications of the introduction of consolidation with formula apportionment in the European Union under alternative enhanced cooperation agreements. We find that the consolidation is likely to yield a small aggregate welfare gain in Europe, but that not all countries benefit. A coalition of winning countries reduces the welfare gain and may induce a process of adverse selection that destroys the possibility of cooperation. We find that a coalition of similar countries (in terms of the size of their multinational sector) is more likely to achieve agreement and is actually preferred by those countries over a European-wide reform.
Various promising claims have been made that business can help alleviate poverty, and can do so in ways that add value to the bottom line. This article begins by highlighting that the evidence for such claims is not especially strong, particularly if business is thought of as a development agent, i.e. an organization that consciously and accountably contributes towards pro-poor outcomes. It goes on to ask whether, if we did know more about either the business case or the poverty alleviation case, would this give cause for greater optimism that business could make a significant contribution to development. By exploring the experiences of producers of Fairtrade tea in Kenya, we reveal the complex nature of what constitutes a beneficial outcome for the poor and marginalized, and the gap that can exist between ethical intentions and the experience of their intended beneficiaries. The lessons of these experiences are relevant for Fairtrade and any commercial initiative that seeks to achieve outcomes beneficial and recognizable to the poor, and raise questions about the integration of social and instrumental outcomes that a future generation of ethical entrepreneurship will need to address.
Using annual data for 75 countries in the period 1960–2000, we present evidence of a positive relationship between investment as a share of gross domestic product (GDP) and the long-run growth rate of GDP per worker. This result is robust for our full sample and for the subsample of non-OECD countries, but not for the subsample of OECD countries. Our analysis controls for time-invariant country-specific heterogeneity in growth rates, and for a range of time-varying control variables. We also address endogeneity issues, and allow for heterogeneity across countries in model parameters and for cross-section dependence.
We present new empirical evidence that aggregate capital accumulation is strongly influenced by the user cost of capital and, in particular, by corporate tax incentives summarised in the tax-adjusted user cost. We use sectoral panel data for the USA, Japan, Australia and ten EU countries over the period 1982-2007. Our panel combines data on capital stocks, value-added and relative prices from the EU KLEMS database with measures of effective corporate tax rates from the Oxford University Centre for Business Taxation. Given the tax-adjusted user cost, we find little additional information in statutory corporate tax rates or effective average tax rates.
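The tax-adjusted user cost summarised in this abstract is conventionally written in the Hall-Jorgenson form; as a hedged sketch (the standard textbook expression, not necessarily the exact specification used in the paper):

```latex
% Hall-Jorgenson-style tax-adjusted user cost of capital (standard
% textbook form; the paper's exact specification may differ)
c \;=\; \frac{p_I}{p_Y}\,(r + \delta)\,\frac{1 - \tau A}{1 - \tau}
```

where p_I/p_Y is the relative price of investment goods, r the required real rate of return, delta the economic depreciation rate, tau the statutory corporate tax rate, and A the present value of depreciation allowances per unit of investment. Corporate tax incentives enter through tau and A, which is why, given this composite term, statutory rates alone carry little additional information.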
This paper investigates the relative performance of enterprises backed by government-sponsored venture capitalists and private venture capitalists. While previous studies focus mainly on investor returns, this paper focuses on a broader set of public policy objectives, including value-creation, innovation, and competition. A number of novel data-collection methods, including web-crawlers, are used to assemble a near-comprehensive data set of Canadian venture-capital-backed enterprises. The results indicate that enterprises financed by government-sponsored venture capitalists underperform on a variety of criteria, including value-creation, as measured by the likelihood and size of IPOs and M&As, and innovation, as measured by patents. It is important to understand whether such underperformance arises from a selection effect, in which private venture capitalists have a higher quality threshold for investment than subsidized venture capitalists, or from a treatment effect, in which subsidized venture capitalists crowd out private investment and, in addition, provide less effective mentoring and other value-added skills. We find suggestive evidence that crowding out and less effective treatment are problems associated with government-backed venture capital. While the data do not allow for a definitive welfare analysis, the results cast some doubt on the desirability of certain government interventions in the venture capital market.
When China acceded to the World Trade Organization (WTO) in 2001, there were fears that Chinese firms would lose market share in key sectors to foreign-invested enterprises (FIEs). Although aggregate data often indicate a shift in favor of FIEs, indigenous firms in many cases have slowly increased market share and deepened their technical capabilities. Through an analysis of aggregate data and three sectors, we show how the dynamics of competition between Chinese firms and FIEs in China's domestic market enhance the upgrading prospects for Chinese firms. China represents a new model of development in several important respects: industrial-upgrading efforts are often domestically driven, within this domestic market there is intense competition between domestic and foreign firms, and this competition is driving and stimulating the upgrading efforts of domestic firms.
Researchers, practitioners and enterprise software providers are realising the potential of agent-based technology to automate supply chain procurement to achieve consistent, traceable decision making. As the complexity of supply chains grows, these systems will gain more attention. In this paper, we model and simulate a complex autonomous supply chain managed by computational agents that aim to minimise lead time and maximise revenue through evolutionary multi-objective optimisation. The agents are in a competitive environment where they take on the roles of both client and producer. In addition to optimising their production strategy, they also have the opportunity to dynamically fine-tune their decision parameters when it comes to selecting their own suppliers, using the Analytical Hierarchy Process. It is observed that computational agents are capable of functioning in such complex environments, effectively converging to policies in synergy with their market. Multi-objective, multi-role optimisation results in better overall supply chain performance than tests where agents have single-objectives and single-roles. Our study forms an exploratory step towards more realistic agent-based supply chains where analytical methods are unavailable.
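The supplier-selection step mentioned above rests on the core AHP computation: deriving priority weights from a pairwise comparison matrix. A minimal sketch in plain Python follows; the criteria and matrix values are hypothetical, and the paper's agents embed this calculation inside a larger optimisation rather than using it stand-alone.

```python
# Sketch of the Analytical Hierarchy Process priority calculation an
# agent might use when weighting supplier-selection criteria.
# The 3x3 pairwise-comparison matrix below is hypothetical.

def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a positive pairwise
    comparison matrix by power iteration, normalised to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Hypothetical criteria: 0 = price, 1 = lead time, 2 = quality.
# matrix[i][j] says how strongly criterion i outweighs criterion j.
comparisons = [
    [1.0,     3.0, 0.5],
    [1 / 3.0, 1.0, 0.25],
    [2.0,     4.0, 1.0],
]

weights = ahp_weights(comparisons)
```

The resulting weights rank quality above price above lead time, mirroring the dominance structure of the comparison matrix; an agent would then score candidate suppliers with these weights.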
Although RFID is seen by many as a revolutionary enabler of automated data capture, confusion still remains as to how manufacturing organisations can identify cost-effective opportunities for its use. Managers view promotional business case estimates as unjustified, simulation based analysis and analytical models as secondary modes of analysis, and case studies are scarce. Further, there is a lack of simple tools to understand how RFID can help to achieve a leaner manufacturing environment, after the use of which practitioners can be routed to grounded forms of analysis. The purpose of this paper is to provide and test such a toolset, which uses the seven Toyota Production System wastes as a template. In our approach, RFID technology is viewed as a vehicle to achieve leaner manufacturing through automated data collection, assurance of data dependencies, and improvements in production and inventory visibility. The toolset is tested on case examples from two push-based, multi-national fast moving consumer goods manufacturing companies. The opportunity analysis is shown to identify not only initially suspected areas of improvement, but also other areas of value.
Industrialists have few example processes they can benchmark against in order to choose a multi-agent development kit. In this paper we present a review of commercial and academic agent tools with the aim of selecting one for developing an intelligent, self-serving asset architecture. In doing so, we map and enhance relevant assessment criteria found in literature. After a preliminary review of 20 multi-agent platforms, we examine in further detail those of JADE, JACK and Cougaar. Our findings indicate that Cougaar is well suited for our requirements, showing excellent support for criteria such as scalability, persistence, mobility and lightweightness.
Managing large-scale transportation infrastructure projects is difficult because of frequent misinformation about costs, which results in large cost overruns that often threaten overall project viability. This paper investigates the explanations for cost overruns that are given in the literature. Overall, four categories of explanations can be distinguished: technical, economic, psychological, and political. Political explanations have been found to be the most dominant. Agency theory is considered the most interesting basis for political explanations, and an eclectic theory is also considered possible. Non-political explanations are diverse in character; a range of different theories (including rational choice theory and prospect theory), depending on the kind of explanation, is therefore considered more appropriate than one all-embracing theory.
Lock-in, the escalating commitment of decision makers to an ineffective course of action, has the potential to explain the large cost overruns in large-scale transportation infrastructure projects. Lock-in can occur both at the decision-making level (before the decision to build) and at the project level (after the decision to build) and can influence the extent of overruns in two ways. The first involves the ‘methodology’ of calculating cost overruns according to the ‘formal decision to build’. Due to lock-in, however, the ‘real decision to build’ is made much earlier in the decision-making process and the costs estimated at that stage are often much lower than those that are estimated at a later stage in the decision-making process, thus increasing cost overruns. The second way that lock-in can affect cost overruns is through ‘practice’. Although decisions about the project (design and implementation) need to be made, lock-in can lead to inefficient decisions that involve higher costs. Sunk costs (in terms of both time and money), the need for justification, escalating commitment, and inflexibility and the closure of alternatives are indicators of lock-in. Two case studies, of the Betuweroute and the High Speed Link-South projects in the Netherlands, demonstrate the presence of lock-in and its influence on the extent of cost overruns at both the decision-making and project levels. This suggests that recognition of lock-in as an explanation for cost overruns contributes significantly to the understanding of the inadequate planning process of projects and allows development of more appropriate means.
The widely used support vector machine (SVM) method has been shown to yield very good results in supervised classification problems. Other methods, such as classification trees, have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in data mining.
In this work, we propose an SVM-based method that automatically detects the most important predictor variables and the role they play in the classifier. In particular, the proposed method is able to detect those values and intervals that are critical for the classification. The method involves the optimization of a linear programming problem in the spirit of the Lasso method with a large number of decision variables. The numerical experience reported shows that a rather direct use of the standard column generation strategy leads to a classification method that, in terms of classification ability, is competitive against the standard linear SVM and classification trees. Moreover, the proposed method is robust; i.e., it is stable in the presence of outliers and invariant to change of scale or measurement units of the predictor variables.
When the complexity of the classifier is an important issue, a wrapper feature selection method is applied, yielding simpler but still competitive classifiers.
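As a rough illustration of the Lasso-spirited idea, the sketch below trains an L1-regularised linear classifier by subgradient descent on the hinge loss. This is a simplified stand-in for the paper's linear-programming / column-generation formulation, on synthetic data in which only the first predictor is informative.

```python
# Generic sketch of an L1-regularised ("Lasso-spirited") linear
# classifier: hinge loss plus an L1 penalty that shrinks the weights of
# uninformative predictors toward zero, so the surviving weights flag
# the important variables. Subgradient descent stands in for the
# authors' linear-programming / column-generation method.

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def train_l1_svm(X, y, lam=0.1, lr=0.01, epochs=200):
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            for j in range(n_features):
                grad = lam * sign(w[j])       # L1 (Lasso) subgradient
                if margin < 1:                # hinge-loss subgradient
                    grad -= yi * xi[j]
                w[j] -= lr * grad
            if margin < 1:
                b += lr * yi
    return w, b

# Synthetic data: only feature 0 separates the classes.
X = [[1.0, 0.3], [0.9, -0.2], [-1.1, 0.1], [-0.8, -0.4]]
y = [1, 1, -1, -1]
w, b = train_l1_svm(X, y)
```

After training, the weight on the informative predictor dominates while the noise predictor's weight is driven toward zero, which is the variable-detection behaviour the abstract describes.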
The article discusses the views of Allan Fels, former chairman of the Australian Competition and Consumer Commission, on developing good retail competition policy. He says that there have been public policy and political issues around retailing all over the world. He emphasizes the high degree of concentration within the retail sector in developing economies due to the financial crisis. He stresses that there are endless demands for the government and policy-makers to do something about retail.
This paper examines the trajectory of pay-performance-sensitivity (PPS) in the years immediately after chief executive officers (CEOs) assume their positions. We show that PPS “steady state equilibrium” is not achieved overnight, but instead evolves through a process whereby CEO incentives increase gradually before eventually leveling off. We discuss various factors that might contribute to these observed dynamics, including liquidity constraints, career concerns, entrenchment, survivor bias, and learning of the CEOs’ true abilities. We find strong support for some, but only mixed support for others. Because median CEO tenure in ExecuComp is eight years, our results suggest that these dynamics of incentive accumulation over a CEO’s tenure cannot be ignored when studying executive incentive schemes. Finally, we show this gradual adjustment of PPS over a CEO’s tenure to have a meaningful impact on firm valuation, as measured by Tobin’s Q.
This study compares the discourses around food and the family in popular ‘women’s magazines’ in the UK and Australia in the post World War 2 period.
Advertisements for food, editorials and articles on nutrition, food, family and healthy eating are examined and analysed to map the discursive production of the consumer citizens within the regulatory device of the nuclear family over this period.
Successfully predicting that something will become a big hit seems impressive. Managers and entrepreneurs who have made successful predictions and have invested money on this basis are promoted, become rich, and may end up on the cover of business magazines. In this paper, we show that an accurate prediction about such an extreme event, e.g., a big hit, may in fact be an indication of poor rather than good forecasting ability. We first demonstrate how this conclusion can be derived from a formal model of forecasting. We then illustrate that the basic result is consistent with data from two lab experiments as well as field data on professional forecasts from the Wall Street Journal Survey of Economic Forecasts.
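The core intuition here, that an accurate forecast of an extreme event can signal a noisy forecaster rather than a skilled one, can be illustrated with a toy simulation. This is not the paper's formal model; all distributions and parameters below are illustrative.

```python
import random

# Toy simulation (NOT the authors' formal model): each forecaster
# observes the true outcome plus personal noise. A very extreme
# forecast is more likely to come from a forecaster with a noisy
# signal, so "calling" an extreme event is weak evidence of skill.

def simulate(n_forecasters=2000, threshold=3.0, seed=0):
    rng = random.Random(seed)
    noisy = []   # noise levels of forecasters who made extreme calls
    steady = []  # noise levels of the rest
    for _ in range(n_forecasters):
        noise_sd = rng.uniform(0.1, 3.0)   # forecaster-specific noise
        truth = rng.gauss(0.0, 1.0)        # quantity being forecast
        forecast = truth + rng.gauss(0.0, noise_sd)
        (noisy if abs(forecast) > threshold else steady).append(noise_sd)

    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return mean(noisy), mean(steady)

extreme_mean_noise, moderate_mean_noise = simulate()
```

Forecasters who issued extreme predictions have, on average, markedly noisier signals than those who did not, even though every forecaster targets the same underlying quantity.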
The globalisation of economic activity and the growing importance of multinational corporations have far-reaching consequences for national tax policies. Since 1995, the average corporate tax rate in the EU has fallen from 35% to 23%. In addition, differences and incompatibilities between the national systems of corporate income taxation distort investment, complicate the tax system and give rise to conflicts between taxpayers and tax authorities as well as between tax authorities of different countries. Given this, there is a widespread view that greater coordination of corporate taxation is required. Recently, the European Commission proposed introducing a Common Consolidated Corporate Tax Base (CCCTB) in Europe. This article discusses the economic advantages and the drawbacks of the CCCTB concept.
We propose a methodology for assessing the neutrality of corporate tax reform proposals in an open economy. The methodology identifies variation in effective tax rates to assess the proximity of a tax system to capital export neutrality (CEN) and to market neutrality (MN, which holds if all potential competitors in a single market face the same effective tax rate). We apply the methodology to two reform options in the EU. Optional international loss consolidation would move the EU tax system away from both CEN and MN. The proposed common consolidated corporate tax base (CCCTB) has mixed effects which depend on the precise comparisons made.
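A minimal sketch of how variation in effective tax rates can proxy distance from the two neutrality benchmarks, assuming a hypothetical home-by-host matrix of effective tax rates; the numbers and the dispersion measure are illustrative, not the paper's.

```python
# Hypothetical illustration of the neutrality diagnostics: given a
# matrix etr[home][host] of effective tax rates, capital export
# neutrality (CEN) requires each ROW to be flat (an investor faces one
# rate wherever it invests), while market neutrality (MN) requires each
# COLUMN to be flat (all competitors in a host market face one rate).
# Average row/column dispersion then measures distance from each
# benchmark. Numbers and measure are illustrative, not the paper's.

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

etr = [  # rows: home country of the investor; columns: host country
    [0.30, 0.28, 0.22],
    [0.27, 0.25, 0.20],
    [0.24, 0.23, 0.19],
]

cen_distance = sum(stdev(row) for row in etr) / len(etr)
mn_distance = sum(stdev(list(col)) for col in zip(*etr)) / len(etr[0])
```

A reform can then be scored by whether it shrinks or widens these dispersion measures, which is the sense in which the abstract's reform options can move the system toward or away from CEN and MN.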
This paper stresses the special role of multinational headquarters in corporate profit shifting strategies. Using a large panel of European firms, we show that multinational enterprises (MNEs) are reluctant to shift profits away from their headquarters even if these are located in high-tax countries. Thus, shifting activities in response to corporate tax rate differentials between parents and subsidiaries are found to be significantly larger if the parent observes a lower corporate tax rate than its subsidiary and profit is thus shifted towards the headquarters firm. This result is in line with recent empirical evidence suggesting that MNEs bias the location of profits and highly profitable assets in favor of the headquarters location (for agency cost reasons among others).
Fairtrade was founded to alleviate poverty and economic injustice through a market-based form of solidarity exchange. Yet with the increasing participation of transnational food corporations in Fairtrade sourcing, new questions are emerging on the extent to which the model offers an alternative to the inimical tendencies of neoliberalism. Drawing on a qualitative research project of Kenyan Fairtrade tea, this paper examines how the process of corporate mainstreaming influences the structure and outcomes of Fairtrade, and specifically the challenges it poses for the realization of Fairtrade’s development aspirations. It argues firstly that whilst tea producers have experienced tangible benefits from Fairtrade’s social premium, these development ‘gifts’ have been conferred through processes marked less by collaboration and consent than by patronage and exclusion. These contradictions are often glossed by the symbolic force of Fairtrade’s key tenets – empowerment, participation, and justice – which simultaneously serve to neutralize critique and mystify the functions that Fairtrade performs for the political economy of development and neoliberalism. Second, building on recent critiques of corporate social responsibility, the paper explores how certain neoliberal rationalities are emboldened through Fairtrade, as a process of mainstreaming installs new metrics of governance (standards, certification, participation) that are at once moral and technocratic, voluntary and coercive, and inclusionary and marginalizing. The paper concludes that these technologies have divested exchange of mutuality, as the totemic features of neoliberal regulation – standards, procedures and protocols – increasingly render north-south partnerships ever more virtual and depoliticized.
Purpose – This paper seeks to bring together ethical governance theory and empirical findings to examine the shifting nature of governance in global value chains, and the implications of this shift for mainstream companies. In particular, it aims to examine one of the more mature models of ethical value chain governance, Fairtrade, and how this is being used by business.
Design/methodology/approach – Information is derived from a longitudinal study of multi-stakeholder co-governance in Kenya and the UK, and an analysis of the literature on similar co-governance models.
Findings – The paper shows that mainstream companies are looking to multi-stakeholder models not only to protect their reputation, but as a way of governing ethical dimensions of their value chains. However, rather than a form of co-governance, it has become a way of outsourcing governance, enabling companies to strengthen their public credibility, while simultaneously transferring an especially difficult element of modern value chain governance to organizations enjoying high consumer trust. Yet, primary data suggest that these governance systems are not delivering the benefits promised, at least at the producer level.
Practical implications – By outsourcing governance to initiatives with dubious credibility in this way, companies may seem at risk. However, the mismatch between the promise and delivery of Fairtrade does not seem to be affecting consumer confidence and, until it does, companies may continue to benefit from the halo effect of being a Fairtrade ally. But there are also opportunities for companies to use Fairtrade's weaknesses to make the value chain a better avenue for delivering ethical governance, with implications for similar co-governance models.
Originality/value – The study draws on one of the very few pieces of longitudinal field research on the impacts of Fairtrade. It approaches Fairtrade from a governance rather than reputations perspective, and emphasizes the implications for mainstream business rather than the co-governance movement.
Globalisation and global challenges demand new governance models. The perceived success of the value chain as a governance mechanism for delivering better social and environmental outcomes has led a growing number of mainstream companies to incorporate brands associated with initiatives such as Fairtrade, the Forest Stewardship Council, and Rainforest Alliance into their procurement and marketing strategies. By buying from ethically certified producers and selling ethically labelled products, retailers and manufacturers are effectively outsourcing a significant part of their supply chain governance to third parties considered to have greater moral credibility than the companies themselves.
This paper explores the implications of such outsourcing for the companies concerned, and in particular the dangers to both corporate reputation and the wider credibility of alternative governance models. Drawing on empirical data from a longitudinal study of Kenyan communities producing for Fairtrade, and situating this within debates about voluntary self-regulation, value chain governance, and international development, the paper details how Fairtrade initiatives have been adopted as part of a governance outsourcing strategy, and the extent to which they are able to help companies meet their societal responsibilities. The paper concludes with a discussion of the lessons for corporate strategy and the management of governance issues. The paper brings together ethical governance theory and empirical findings to examine the shifting nature of governance in global value chains, and the implications of this shift for mainstream companies. In particular, it examines one of the more mature models of ethical value chain governance, Fairtrade, and how this is being used by business.
The education of girls is a primary focus of development efforts in poor countries because female achievement, especially at the secondary level, is believed to have long-lasting and far-reaching economic effects. Complex multiple factors work against girls’ education in developing countries, including entrenched beliefs and practices that devalue female education. However, one simple contributing factor has recently been suspected of having an impact on girls’ remaining in school: poor girls often have no access to sanitary products and, as a result of feared embarrassment, attend irregularly, perform poorly, and then drop out.
This article focuses on the diffusion and adoption of innovations in clinical practice. The authors are specifically interested in underresearched questions concerning the latter stages of the creation, diffusion, and adoption of new knowledge, namely: What makes this information credible and therefore utilized? Why do actors decide to use new knowledge? And what is the significance of the social context of which actors are a part?
Institutional theory has energized a large and vibrant academic community, but it is largely unknown to managers and inconsequential with respect to the management of organizations. This is despite what the authors believe is an immense potential practical contribution. In this article, the authors suggest that institutional theory needs a gap year—a period in which core frameworks and insights from an institutional perspective are brought into contact with complex social problems. The authors focus on the study of institutional work and argue that an extended encounter with the world of participatory action research could lead to new answers to key questions and energize the development of institutional theory as both an academic and a practical project.
As we write this chapter, in the autumn of 2008, the US financial sector is in crisis – major investment banks have gone bankrupt, others have lost most of their market value, and the US Congress is considering a bailout of some 700 billion US dollars (The Economist, 2008). The situation represents a clear case of the extraordinary potential for breakdown in social systems that depend on complex layers of technology and institutionalized practice – a “logistical nightmare of fixing a market whose complexity is central to the crisis” (The Economist, 2008, p. 81). More generally, we argue that it challenges prevailing images of technology and institutions as stabilizing forces and points to the fundamentally important, but often neglected, work of maintaining technology and institutions.
The aim of this paper is to establish whether jurisprudence of the European Court of Justice (ECJ) on corporate tax leads to a more level playing field and increased tax neutrality within the European Internal Market. It uses the ruling on Lankhorst-Hohorst regarding the compatibility of thin capitalisation with free movement provisions as a case study to demonstrate how the jurisprudence of the Court does not necessarily lead to increased neutrality or a more level playing field. An economic analysis demonstrates that, depending on the reaction of Member States to the ruling, differences in capital costs faced by firms operating in the European Internal Market may increase, whilst GDP and welfare may decrease. Consideration of actual legislative amendments introduced to thin capitalisation rules by Member States following Lankhorst-Hohorst seems to indicate that it is this negative scenario which has prevailed. The paper considers the European constitutional implications of this conclusion.
In February 2006 the Court of Justice delivered its eagerly awaited ruling in Halifax, a case concerning the interpretation of EU secondary legislation on VAT. The judgment represented the culmination of a long process, with the Court referring for the first time to the 'principle of prohibiting abusive practices'. Yet, Halifax also represented the beginning of a new process: the discussion over the significance of the newly designated 'principle of prohibition of abuse of law'. Fundamental questions immediately arose and became the subject of intensive debate: the scope of application of the principle - would it apply to other areas of tax law, in particular to corporate taxation; the criteria for application of the principle - how would the abuse test set out by the Court in Halifax be applied; and the nature and implications of this 'principle' - interpretative, general, or neither.
This brief paper considers the ongoing debate over the development of an EU principle of prohibition of abuse of law, offering some further thoughts on the topic. In particular, the paper reflects first on the role of the principle within the field of free movement of persons, in the context of the literature on convergences and divergences between the fundamental freedoms. It then progresses to propose the notion of reverberation as a new conceptual framework for the analysis of the development of general principles of EU law - in this case based upon the ongoing example of prohibition of abuse of law. The paper concludes with some thoughts on the future of the principle of prohibition of abuse of law and its potential consequences.
The aim of this paper is to establish whether jurisprudence of the Court of Justice of the European Union (CJ) on corporate tax leads to a more level playing field and increased tax neutrality within the European Internal Market. It uses two rulings as case studies to demonstrate
how the jurisprudence of the Court does not necessarily lead to increased neutrality or a more level playing field. The first ruling in Lankhorst-Hohorst regards the compatibility of thin capitalisation with free movement provisions; the second in Marks & Spencer concerns the
compatibility of rules on group consolidation with those same provisions. An economic analysis demonstrates that, depending on the reaction of Member States to the ruling, differences in capital costs faced by firms operating in the European Internal Market may increase, whilst GDP and welfare may decrease. Consideration of actual legislative amendments introduced to thin capitalisation rules by Member States following Lankhorst-Hohorst and to group consolidation rules following Marks & Spencer seems to indicate that it is this negative scenario which has
prevailed. The paper considers the European constitutional implications of this conclusion.
This paper provides a legal and economic analysis of the European Commission's recent proposals for reforming the application of VAT to financial services, with particular focus on their 'third pillar', under which firms would be allowed to opt in to taxation on exempt insurance and financial services. From a legal perspective, we show that the proposals' 'first and second pillars' would give rise to considerable interpretative and qualification problems, resulting in as much complexity and legal uncertainty as the current regime. Equally, an option to tax could potentially follow significantly different legal designs, which would give rise to discrepancies in the application of the option amongst Member States of the European Union (EU). On the economic side, we show that quite generally, when firms cannot coordinate their behaviour, they have an individual incentive to opt in on business-to-business (B2B) transactions, but not on business-to-consumer (B2C) transactions. We also show that opting-in eliminates the cost disadvantage that EU financial services firms face in competing with foreign firms for B2B sales. But these results do not hold if firms can coordinate their behaviour. An estimate of the upper bound on the amount of tax revenue that might be lost from allowing opting-in is provided for a number of EU countries.
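The asymmetry between B2B and B2C opting-in follows from standard VAT mechanics: an exempt supplier cannot recover input VAT, while an opted-in supplier recovers it but must charge output VAT, which only business customers can deduct. The following sketch illustrates this logic with purely hypothetical numbers (a 20% rate and illustrative costs, not figures from the paper):

```python
# Illustrative arithmetic: why a financial services firm would opt in to VAT
# on B2B sales but not B2C sales. All numbers are assumptions for exposition.
VAT = 0.20
input_cost_net = 100.0
input_vat = input_cost_net * VAT          # irrecoverable if the firm is exempt
value_added = 50.0

# Exempt supplier: input VAT is a real cost, so the price must cover it.
price_exempt = input_cost_net + input_vat + value_added      # 170.0

# Opted-in supplier: input VAT is recovered; output VAT is charged on top.
price_net_opt = input_cost_net + value_added                 # 150.0
price_gross_opt = price_net_opt * (1 + VAT)                  # 180.0

# A B2B customer deducts the output VAT, so it effectively pays the net price.
cost_b2b_exempt, cost_b2b_opt = price_exempt, price_net_opt
# A B2C customer cannot deduct, so it bears the gross price.
cost_b2c_exempt, cost_b2c_opt = price_exempt, price_gross_opt

print(cost_b2b_opt < cost_b2b_exempt)   # opting in is attractive for B2B sales
print(cost_b2c_opt > cost_b2c_exempt)   # but unattractive for B2C sales
```

The example also shows the competitive point: the B2B cost disadvantage of the exempt firm (170 vs 150) is exactly the blocked input VAT, which opting-in removes.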
In most countries, profit taxation is probably much more relevant nowadays than trade liberalisation when it comes to firm-level decisions about investment. Empirically, firms are quite heterogeneous with regard to fixed costs: the composition of assets (tangible versus intangible; machinery versus buildings; etc.) and the financing of investments. Then, even uniform changes in profit tax instruments cause heterogeneous responses of firm-level effective tax rates and, hence, after-tax profits. We argue that, with similar profit margins, firms would then require pre-tax profits to differ as well. Governments change statutory profit tax rates and, by virtue of firms’ heterogeneity, they cause stark selection effects which are mainly related to heterogeneous fixed rather than variable costs. We compute costs of capital for a large sample of firms to illustrate how homogeneous changes in tax instruments hit firms differently. Using Bureau van Dijk’s ORBIS database, we illustrate that the effects of changes in statutory instruments have relatively large systematic variance components across industries within countries and also relatively large ones of firms within industries and countries.
This article describes the tax regimes for corporations in the Asia-Pacific, India and Russia and assesses the effective tax burden on the domestic and cross-border investments of German and US investors. The calculation of effective tax burdens is based on the renowned methodology of Devereux/Griffith. This approach allows condensing the most relevant provisions of tax regimes into a broadly accepted tax burden measure. The results help evaluate the attractiveness of the respective investment locations from a tax perspective and identify the most relevant tax drivers.
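A minimal sketch of the Devereux/Griffith calculation in its standard textbook form, for an equity-financed investment with no personal taxes or inflation; the parameter values below are illustrative assumptions, not figures from the article:

```python
# Devereux/Griffith cost of capital (standard simplified form):
#   p~ = (1 - A) * (r + delta) / (1 - tau) - delta
# tau:   statutory corporate tax rate
# A:     net present value of depreciation allowances per unit of investment
# r:     real interest rate
# delta: rate of economic depreciation
def cost_of_capital(tau, A, r, delta):
    return (1 - A) * (r + delta) / (1 - tau) - delta

def emtr(p_tilde, r):
    """Effective marginal tax rate: the wedge between the pre-tax required
    return and the post-tax return, relative to the pre-tax return."""
    return (p_tilde - r) / p_tilde

# Illustrative parameters (assumed, not from the article).
p = cost_of_capital(tau=0.30, A=0.18, r=0.05, delta=0.10)
print(round(p, 4), round(emtr(p, 0.05), 4))
```

With these assumed inputs the required pre-tax return exceeds the market rate of 5%, and the wedge is summarized by the effective marginal tax rate, which is how such measures condense many tax provisions into one comparable number.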
Considerable attention has focused on how multinational corporations (MNCs) deal with the simultaneous pressures of globalization and localization when it comes to human resource management (HRM). HR function activities in this process, however, have received less focus. The study presented here identifies configurations of the corporate HR function based on international HRM (IHRM) structures, exploring how issues of interdependency shape corporate HR roles. The study is based on 248 interviews in 16 MNCs based in 19 countries. The findings are applied to develop a contextually based framework outlining the main corporate HR function configurations in MNCs, including new insights into methods of IHRM practice design.
This paper analyzes certain policies that are typical of a number of rapidly growing East Asian countries in which a fixed exchange rate, combined with a surplus labor market, has made domestic assets relatively inexpensive, generating high rates of FDI as well as domestic capital formation. This “investment hunger” can lead to unanticipated declines in the returns to investment, and resulting financial insolvencies. Private consumption remains low and there are concerns that high savings rates cannot be sustained.
We construct a dynamic general equilibrium model and apply it to a stylized Asian economy, loosely based upon China. We calibrate a benchmark equilibrium, and carry out various counterfactual simulations to analyze alternative policies, in particular tax cuts and exchange rate revaluations, as instruments in increasing private consumption while avoiding bank failures.
Explores aspects of the Government's declared approach to tax policy with reference to the discussion document, "Tax Policy Making: a new approach", published by HM Treasury and HMRC in June 2010. Considers the possible enactment of three Finance Acts in one year, with the two Finance Acts enacted so far in 2010 and a third Bill expected in the autumn. Looks at tax consultations, and two reviews being conducted by the Office of Tax Simplification.
This article revisits the arguments made by John Avery Jones in 1996 for “less detailed legislation interpreted in accordance with principles.” His plea has been influential but perhaps not completely understood. Recent developments in the UK, particularly in the area of so-called “principles-based drafting” may not have assisted in promoting this cause. It is frequently argued that real improvements in tax law require a coherent underlying policy, not just drafting changes. Obviously, drafting techniques will not cure a poorly structured tax and so ideally the starting point would be improved policy. But we cannot afford to wait for a total policy overhaul. A different approach to the way we legislate could both improve the way we think about policy and result in better implementation, application and legitimacy in decision making. Principles-based drafting is not a solution to all ills. Nevertheless, it could offer one route, in appropriate cases, to improvement as well as, in other cases, highlighting the need for more fundamental reform. We should not give up this experiment simply because it has not yet delivered total success. No new drafting technique can deliver a perfect tax system, but it is worth persevering with principles-based legislation.
To operate efficiently and effectively revenue authorities require discretion, but processes must be in place to keep discretion in check. This delicate balancing act takes place against the background of a more general constitutional framework. This working paper starts by outlining the unique features of the UK constitution that form the background to the way in which the discretion of the UK revenue authorities (HMRC) is assessed and controlled. It then discusses a limited, yet crucial, set of discretions vested in HMRC. The paper focuses particularly on the use of non-statutory guidance. It discusses the operation of judicial review, and in particular, the doctrine of legitimate expectations, in the context of such guidance. It then presents two case studies, which reveal a distinct uncertainty over the limits of non-statutory guidance. This is currently of considerable concern to the UK tax community. As the role of guidance appears to be increasing in the UK system, the case for addressing some of these causes of uncertainty strengthens. The paper concludes by offering some preliminary suggestions on how the problems may be addressed.
This paper analyses the effectiveness of the corporate income tax as an automatic stabilizer. It employs a unique firm-level data set of German manufacturers combining financial statements with firm-specific information about credit market restrictions. The results show that approximately 20 per cent of all firms report both positive taxable income and capital market restrictions. Taking account of the income tax rates and the size differences of the firms, we find that demand stabilization through the corporate income tax amounts to about 8 per cent of an initial shock to gross revenues. This stabilization effect varies over the business cycle and tends to increase during cyclical downturns.
How do different components of the tax and transfer systems affect disposable income inequality? This article explores the redistributive effects of different tax benefit instruments in the enlarged European Union (EU) based on two approaches. Inequality analysis based on the sequential accounting approach suggests that benefits are the most important factor reducing inequality in the majority of countries. The factor source decomposition approach, however, suggests that benefits play a negligible role and sometimes even contribute slightly positively to inequality. On the contrary, here taxes and social contributions are by far the most important contributors to income inequality reduction. The authors explain these partly contradictory results with the different normative focus of the two approaches and show that benefits have other aims than redistribution.
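The sequential accounting approach described above can be illustrated with a toy calculation: compute a Gini coefficient on market income, then add benefits, then subtract taxes and contributions, and compare the inequality at each step. The data below are invented for exposition (the article uses EU household microdata):

```python
# Sequential accounting sketch with synthetic data (not the article's inputs).
def gini(x):
    """Gini coefficient via the sorted cumulative-share formula."""
    x = sorted(x)
    n = len(x)
    cum = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2 * cum / (n * sum(x)) - (n + 1) / n

market = [10, 20, 30, 60, 120]      # market incomes of five households
benefits = [15, 10, 5, 0, 0]        # means-tested benefits
taxes = [0, 2, 4, 12, 35]           # progressive taxes and contributions

gross = [m + b for m, b in zip(market, benefits)]
disposable = [g - t for g, t in zip(gross, taxes)]

# Inequality falls at each step of the sequence.
print(round(gini(market), 3), round(gini(gross), 3), round(gini(disposable), 3))
```

The factor source decomposition approach mentioned in the abstract would instead attribute shares of total inequality to each income component, which is why the two methods can rank benefits and taxes differently.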
In the debate on the impact of illicit capital flows on developing countries, the view is widespread that profit shifting to low tax jurisdictions undermines the ability of developing countries to raise tax revenue. While the shifting of income out of developed countries is a widely debated issue, empirical evidence on the magnitude of the problem and on the factors driving income shifting is scarce. This paper reviews the literature on tax avoidance and evasion through border crossing income shifting out of developing countries. Moreover, we discuss methods and available datasets which can be used to gain new insights into the problem of corporate income shifting. We argue that results of many existing studies on tax avoidance and evasion in developing countries are difficult to interpret, mainly because the measurement concepts used have a number of drawbacks. We discuss some alternative methods and datasets and present some empirical evidence which supports the view that profit shifting out of many developing countries and into tax havens takes place.
We define a non-tâtonnement dynamics in continuous-time for pure-exchange economies with outside and inside fiat money. Traders are myopic, face a cash-in-advance constraint and play dominant strategies in a short-run monetary strategic market game involving the limit-price mechanism. The profits of the Bank are redistributed to its private shareholders, but they can use them to pay their own debts only in the next period. Provided there is enough inside money, monetary trade curves converge towards Pareto optimal allocations; money has a positive value along each trade curve, except on the optimal rest-point where it becomes a veil while trades vanish. Moreover, generically, given initial conditions, there is a piecewise globally unique trade-and-price curve not only in real, but also in nominal variables. Finally, money is locally neutral in the short-run and non-neutral in the long-run.
We study the choice of bank nationality by foreign-born retail banking customers in the context of bank globalization. We argue theoretically that banks enter foreign markets to follow their non-corporate customers and thereby are able to exploit competitive advantages over domestic banks. Using detailed survey data on more than 1000 Turkish immigrants in Germany, we find that product differentiation explains the choice of a home nation bank and that ethnic origin in itself provides the strongest comparative advantage for foreign banks. We find evidence for a persistent ‘home-bias’ of customers with an immigration background even with increasing integration into the host country's culture. This result may be surprising given that a systematic difference in the choice of bank nationality should not be observable with more integrated immigrants. Our results contribute to the existing economic research on multinational bank expansion by providing insights into bank globalization as an accompaniment to labor market internationalization.
Until recently, financial services regulation remained largely segmented along national lines. The integration of financial markets, however, calls for a systematic and coherent approach to regulation. This paper studies the effect of market based regulation on the proper functioning of the interbank market. Specifically, we argue that restrictions on the payout of dividends by banks can reduce their expected default on (interbank) loans, stimulate trade in this market and improve the welfare of consumers.
This article analyses the demand side of business services outsourcing. Using insights from a number of bodies of literature, the article interprets business services outsourcing as corporate restructuring involving the administrative functions of the firm. Propositions are developed around the following. First, the existing corporate structure and the nature of supplier markets affect the paths chosen to create shared business services and to move to outsourcing. Second, the trajectory of the move to shared services and outsourcing affects the distribution of capabilities between users and suppliers. The study examines these propositions through a comparison of human resource outsourcing in two leading consumer products companies, Procter & Gamble (P&G) and Unilever. We find that a relatively high degree of centralization at P&G led it to create an internal shared services center before outsourcing, whilst a more decentralized Unilever utilized outsourcing as an occasion for globally standardizing its systems and processes. The article draws implications of these different paths for core capabilities.
The response of an ecosystem to perturbations is mediated by both antagonistic and facilitative interactions between species. It is thought that a community's resilience depends crucially on the food web—the network of trophic interactions—and on the food web's degree of compartmentalization. Despite its ecological importance, compartmentalization and the mechanisms that give rise to it remain poorly understood. Here we investigate several definitions of compartments, propose ways to understand the ecological meaning of these definitions, and quantify the degree of compartmentalization of empirical food webs. We find that the compartmentalization observed in empirical food webs can be accounted for solely by the niche organization of species and their diets. By uncovering connections between compartmentalization and species' diet contiguity, our findings help us understand which perturbations can result in fragmentation of the food web and which can lead to catastrophic effects. Additionally, we show that the composition of compartments can be used to address the long-standing question of what determines the ecological niche of a species.
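One common way to operationalize compartments is as modules found by modularity maximization on the network of trophic interactions. The toy food web below is invented for illustration (it is not the paper's data, and the paper compares several definitions of compartments, not only modularity):

```python
# Toy illustration: detecting compartments in a small hypothetical food web
# via greedy modularity maximization in networkx.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

web = nx.DiGraph()
# Two loosely connected sub-webs: a grassland chain and an aquatic chain,
# joined by a single cross-compartment feeding link (minnow -> bird).
web.add_edges_from([
    ("grass", "grasshopper"), ("grasshopper", "bird"),
    ("grass", "rabbit"), ("rabbit", "fox"), ("bird", "fox"),
    ("algae", "zooplankton"), ("zooplankton", "minnow"),
    ("minnow", "heron"), ("minnow", "bird"),
])

# Modularity-based community detection ignores edge direction here.
compartments = greedy_modularity_communities(web.to_undirected())
print([sorted(c) for c in compartments])
```

On this toy web the algorithm recovers the grassland and aquatic sub-webs as separate compartments; the single cross-link is exactly the kind of connection whose removal would fragment the web.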
Purpose – The purpose of this paper is to understand the gender-related challenges of Pakistani women entrepreneurs, to explore these women's particular capacity-building needs, and to assess the impact of capacity-building programs on the establishment and performance of the women's enterprises.
Design/methodology/approach – The paper begins with a review of various theoretical contexts through which to understand women's entrepreneurship in an Islamic socio-cultural context. From this, the paper derived two working propositions: women in Islamic Pakistan face particular barriers to becoming entrepreneurs; these barriers can be reduced by women-only training in entrepreneurial competences. These propositions are examined in a three-part longitudinal process: a field survey to gather information about the training needs of current and potential women entrepreneurs; the design and delivery of a women-only training module; and a follow-up survey with participants 18 months later. Subjects and participants were randomly selected, and segmented according to entrepreneurial factors and characteristics.
Findings – Results confirm that the barriers perceived by women entrepreneurs in Islamic Pakistan can be alleviated through women-only training that allows participants to develop capital and competences. Greater clarity about learning outcomes desired and achieved by women entrepreneurs in an Islamic socio-cultural context can be a basis for designing improved training and education programmes, with a view to women's economic empowerment.
Practical implications – For women entrepreneurs living in an Islamic society, this analysis has implications for understanding the importance and effectiveness of entrepreneurial training especially in a women-only setting. For policy makers, it turns the spotlight on the need for creating an environment conducive to female entrepreneurship consistent with socio-cultural structures and gender asymmetries.
Originality/value – There are no comparable previous data on the learning preferences and outcomes of this particular group of women entrepreneurs.
This article uses two case studies of family-owned firms to assess the organizational capabilities necessary for survival under conditions of environmental volatility. Both organizations belong to the edible oil industry and are among Argentina’s leading oilseed processors and exporters. The most-adaptable firm undertook transformations involving continuous change, while the less-adaptable firm displayed a more revolutionary attempt at transformation. The outcome of the transformation process in the most-adaptable firm does not conform to patterns portrayed in the literature on adaptation and change, which involve long periods of stability or convergence and short periods of revolutionary change. We find that the life-cycle of family firms, the role of the founder, control systems and the professionalization of the management team, and ownership issues most strongly influence the capabilities of these firms.
The objective of the BC Angel on-line survey was to gather data on angel investing and to analyze the BC angel investment community, individual and organized investors, their composition (diversity), motivations, investment preferences and modus operandi. The survey also focused on individual and organized angel investors who invest in companies that qualify for the tax incentive schemes of the BC government. The on-line survey was analysed to provide further insight into the economic effects of BC's Equity Capital Program (ECP), best practices within the program and to provide suggestions for program improvement.
The objective of this study is to evaluate the economic impact of the venture capital program (VCP) in the province of British Columbia. The study focuses on the economic and financial performance of the companies in the program, including a comparison of the tax credits received versus the taxes paid by these companies.
Alfred Chandler attributed the rise of the vertically integrated corporation in the twentieth century to improvements in transportation and communication. In contrast, many have argued that further advances in transportation and communication have made vertical integration obsolete in recent years, replacing it with modularity, outsourcing, and networking. This article unpacks this apparent puzzle by regarding technological improvements in transportation and communication as theoretically neutral with respect to the degree of vertical integration. We argue that the key concepts and issues in supply chain management that Chandler highlighted remain highly relevant today. We integrate Chandler’s detailed historical perspective on the evolution of the “visible hand” of managerial governance with more recent theories from organization economics and from engineering, yielding the following insights. First, aligning incentives of buyers and suppliers is important in achieving throughput and assured supply, but asset ownership is neither necessary nor sufficient for this. Second, vertical integration (and disintegration) decisions affect the internal operation of the firm and its future path. Third, firms need to design their value chains in such a way as to achieve coordination without information overload. The article demonstrates the continuing power of these insights in three phases over the last century. In the first phase (with the rise of mass production), Chandler himself noted a subtle array of make-and-buy decisions. In the second phase (with the rise of lean production), several varieties of non-integration (e.g. exit vs. voice) persisted because of the specific ways in which firms combined incentive alignment and information flow. In the third (“New Economy”) phase, management of information and material flows through the supply chain remains an important source of competitive advantage. 
In particular, disintermediation as a form of vertical integration, and successful outsourcing require investment in technical expertise over a wide range of technological fields and coordination of knowledge to manage suppliers (see for example, Brusoni and Prencipe, 2001; Brusoni, 2003; Clark and Fujimoto, 1991). By noting that neither externalization through outsourcing nor flattening of managerial hierarchy is the same as decentralization, the article provides theoretical and empirical bases for the continuing importance of Chandler’s principles in managing supply chains.
The sociology of work has treated the assistant role in a limited way: it has been seen as a ‘cheap’ source of labour or as a convenient ‘dumping ground’ for the delegation of routine tasks as ‘superordinate’ occupations seek to professionalise. This perspective is reflected in public policy as it relates to the healthcare assistant (HCA), the role being viewed, within the context of NHS modernisation, as a flexible source of labour with scope to ‘free up’ the nurse. Policy makers have, however, tempered this view with suggestions that the role has a distinctive contribution to make to healthcare quality. Implicitly this is seen to lie in the structure of the role and in those filling it, making care more accessible to the patient. Nonetheless, there are policy risks associated with the increased use of the HCA: the role remains unregulated, with the possibility that patients regard treatment by HCAs as a diminution rather than enhancement of care quality. This paper addresses whether the HCA enhances and/or diminishes the patient care experience in a hospital setting. Drawing on interview, survey, focus group and observational data from four hospitals, it explores the HCA patient relationship from the perspective of three stakeholders: the HCA, the patient and the nurse. These data provide a strong evidence base for the suggestion that HCAs develop a distinctive and ‘positive’ relationship with patients. However, perceptions of this relationship display some contradiction and suggest that any ‘positive’ contribution might be contingent on certain management practices.
Debates on emotion at work have long acknowledged an interest in both the management of the workers’ emotions and those of the customer with whom the worker interacts. The weight of interest has, however, more consistently fallen on the former. Hochschild’s conception of the alienating nature of emotional labour encouraged a research stream on its negative psychological consequences for the worker, while an influential line of work was stimulated by Bolton’s identification of different normative regimes regulating employee emotion management. This paper seeks to redress the balance, exploring the part played by a workplace role, the healthcare assistant (HCA), in managing the emotions of service users, in this case hospital patients. It argues that the HCA role might be seen to have contradictory effects on the management of patient emotions. The role might be seen as being particularly effective in providing emotional support for patients: the tacit skills of post holders being useful in this respect. However, as a largely unregulated role, it might equally be seen to generate patient anxiety and stress. Drawing on interview, survey, focus group and observational data from four hospitals, this paper explores these outcomes from the perspective of three stakeholders: the HCA, the patient and the nurse. These data suggest that HCAs bring to healthcare distinctive capabilities which support the management of patient emotions. However, a residual degree of ambiguity amongst certain stakeholders remains about the HCA contribution in these terms, with some suggestion that any ‘positive’ input is contingent upon certain management practices.
As the nurse role becomes more specialised, leaving the healthcare assistant (HCA) to deliver much of the direct care, questions arise about the nature of the HCA-patient relationship. Do HCAs have a distinctive relationship with patients and what form does this take? Addressing these questions, the paper explores the perceptions of different actors with a stake in this relationship using a multi-methods approach. What is the contribution made by different research techniques to an understanding of the relationship?
Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation.
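The "automatic relevance determination" idea evaluated in this paper can be illustrated with a minimal MacKay-style sketch on synthetic data: each weight gets its own prior precision, and precisions for irrelevant inputs grow until those weights are pruned. The linear model, synthetic data, and update rules below are illustrative assumptions, not the paper's actual load-forecasting networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: the target depends on input 0 only; input 1 is irrelevant.
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=n)

# ARD for a Bayesian linear model: one precision hyperparameter per weight.
alpha = np.ones(2)      # per-weight prior precisions
beta = 1.0              # noise precision
for _ in range(50):
    Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))   # posterior covariance
    mu = beta * Sigma @ X.T @ y                              # posterior mean
    gamma = 1.0 - alpha * np.diag(Sigma)                     # effective parameters
    alpha = np.minimum(gamma / (mu**2 + 1e-12), 1e10)        # re-estimate precisions
    beta = (n - gamma.sum()) / np.sum((y - X @ mu) ** 2)     # re-estimate noise

# The irrelevant input ends with a huge alpha and a weight driven towards zero.
print(np.round(mu, 3))
```

In a neural-network setting the same mechanism operates on the input-to-hidden weight groups, so inputs whose precisions diverge are effectively deselected, which is the input-selection role the paper evaluates.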
This paper assesses the agglomeration pattern of four-digit industries in Germany using a rich data set on the population of German firms. To identify geographical agglomeration, we follow the distance based approach of Duranton and Overman (2005) and find that the location pattern of 78% of our industries departs from randomness in the sense that firms exhibit significant geographical localization. In line with previous studies on manufacturing firms in the UK and France, our analysis suggests that especially traditional manufacturing industries exhibit strong localization patterns. Moreover, we find that geographical localization is not restricted to the manufacturing sector but that it plays an equally, or even more important role in service industries.
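The distance-based approach is, in spirit, a comparison of an industry's bilateral-distance distribution against random counterfactual draws from the whole population of firms. The sketch below uses synthetic coordinates and a simple mean-distance statistic; it is a stylized illustration, not the full Duranton and Overman kernel-density estimator with confidence bands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coordinates: one tightly clustered industry inside a larger economy.
economy = rng.uniform(0.0, 1.0, size=(80, 2))          # all other firms
industry = 0.5 + rng.normal(0.0, 0.01, size=(20, 2))   # clustered industry
pool = np.vstack([economy, industry])

def mean_pairwise_distance(pts):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[np.triu_indices(len(pts), k=1)].mean()

observed = mean_pairwise_distance(industry)

# Counterfactual: the same number of firms drawn at random from the whole economy.
draws = [mean_pairwise_distance(pool[rng.choice(len(pool), 20, replace=False)])
         for _ in range(200)]
threshold = np.percentile(draws, 5)

# "Localized" if observed distances are significantly smaller than random draws.
localized = observed < threshold
print(localized)
```

The published estimator replaces the mean with a kernel density of bilateral distances and rejects randomness when the observed density leaves a simulated confidence band, but the counterfactual-sampling logic is the same.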
Marketers and academics strive to understand and gain insights into the myriad ways in which consumers behave. The question is: who is driving whom? Are customers really as complicated as we make them out to be, or are marketers creating the monster consumers that they are then trying to serve in innovative ways?
Organization theory is a theory without a protagonist. Organizations are typically portrayed in organizational scholarship as aggregations of individuals, as instantiations of the environment, as nodes in a social network, as members of a population, or as a bundle of organizing processes. This paper hopes to highlight the need for understanding, explicating, and researching the enduring, noun-like qualities of the organization. We situate the organization in a broader social landscape by examining what is unique about the organization as a social actor. We propose two assumptions that underlie our conceptualization of organizations as social actors: external attribution and intentionality. We then highlight important questions and implications forming the core of a distinctively organizational analytical perspective.
Using the German local business tax as a testing ground, we empirically investigate the impact of firm agglomeration on municipal tax setting behavior. The analysis exploits a rich data source on the population of German firms to construct detailed measures for the communities' agglomeration characteristics. The findings indicate that urbanization and localization economies exert a positive impact on the jurisdictional tax rate choice which confirms predictions of the theoretical New Economic Geography (NEG) literature. Further analysis suggests a qualification of the NEG argument by showing that a municipality's potential to tax agglomeration rents depends on its firm and industry agglomeration relative to neighboring communities. To account for potential endogeneity problems, our analysis exploits long-lagged population and infrastructure variables as instruments for the agglomeration measures.
Drawing primarily from Selznick's institutionalism, we make a general case for renewed attention to the "mundane administrative arrangements" that underlie the organizational capacity for value realization and a particular case for the study of value-subverting management innovations. An empirical study of "enrollment management" in liberal arts colleges reveals this ostensibly innocuous innovation's value-undermining effects and identifies the organizational and environmental factors that have made these venerable organizations more or less susceptible to its adoption.
This paper presents a case study on the research and development of an RFID-based work-in-progress container tracking system at a confectionery manufacturer. We report on the management of the RFID project, the system design and the economic evaluation of the solution as compared to the situation before implementing RFID. We discuss the case from a practitioner's view as well as from an academic view regarding the theoretical implications that can be drawn from it. The lessons learned from the project can help other companies to better anticipate the challenges they may experience and make them aware of the possible ways to cope with such challenges prior to starting an RFID implementation.
Over the last decade the flying patterns and foraging behavior of bees have become a matter of public policy in the European Union. Determined to establish a system where transgenic crops can `coexist' with conventional and organic farming, the EU has begun to erect a system of demarcations and separations designed to minimize the extent of `gene flow' from genetically modified plants. As the European landscape is regimented through the introduction of isolation distances and buffer zones, bees and other pollinating insects have become vectors of `genetic pollution', disrupting the project of cohabitation and purification devised by European authorities. Drawing on the work of Michel Serres on parasitism, this paper traces the emergence of bees as an object of regulatory scrutiny and as an interrupter of the `coexistence' project. Along with bees, however, another uninvited guest arrived unexpectedly on the scene: the beekeeper, who came to see his traditional relationship to bees, crops, and consumers at risk. The figure of the parasite connects the two essential dynamics described in this paper: an escalation of research and the intensification of political attributes.
Anticipating risks has become an obsession of the early twenty-first century. Private and public sector organisations increasingly devote resources to risk prevention and contingency planning to manage risk events should they occur. This book shows how we can organise our social, organisational and regulatory policy systems to cope better with the array of local and transnational risks we regularly encounter. Contributors from a range of disciplines - including finance, history, law, management, political science, social psychology, sociology and disaster studies - consider threats, vulnerabilities and insecurities alongside social and organisational sources of resilience and security. These issues are introduced and discussed through a fascinating and diverse set of topics, including myxomatosis, the 2012 Olympic Games, gene therapy and the recent financial crisis. This is an important book for academics and policy makers who wish to understand the dilemmas generated in the anticipation and management of risks.
• Presents a new analytical take on risk regulation issues by focusing on the role of anticipation
• Multi-disciplinary approach gives a broad-ranging view of the varying social science debates about risk regulation, including an historical view on risk society debates
• Includes detailed case studies covering a range of different regulatory d
On the seventh day of the trial of The State of Tennessee vs. John Thomas Scopes, William Jennings Bryan was cross-examined by Clarence Darrow. What ensued was one of the most famous exchanges in American legal history, and a constant referent in the struggle between religious Fundamentalists and defenders of academic freedom and natural evolution. Many saw in Darrow’s interrogation of Bryan a moment of revelation, a dramatic instantiation of the irreducible conflict between ‘ancient religion’ and ‘modern reason’. This paper returns to the transcript of the trial to examine what form of incommensurability was produced in the course of that celebrated examination. Contrary to the conventional interpretation of the Scopes trial as a moment of mutual untranslatability between irreconcilable positions, an analysis of the conversational structure of the exchange suggests a surprisingly robust and productive dialogue. It reveals a form of difference sustained by a shared grammar of mutual accountability.
The increasing emphasis on understanding the antecedents and consequences of customer-to-customer (C2C) interactions is one of the essential developments of customer management in recent years. This interest is driven largely by new online environments that enable customers to be connected in numerous new ways and also give researchers access to rich C2C data. These developments present an opportunity and a challenge for firms and researchers who need to identify the aspects of C2C research on which to focus, as well as develop research methods that take advantage of these new data. The aim here is to take a broad view of C2C interactions and their effects and to highlight areas of significant research interest in this domain. The authors look at four main areas: the different dimensions of C2C interactions; social system issues related to individuals and to online communities; C2C context issues including product, channel, relational and market characteristics; and the identification, modeling, and assessment of business outcomes of C2C interactions.
We survey chief financial officers from 29 countries to examine whether and why firms use lines of credit versus non-operational (excess) cash for their corporate liquidity. We find that these two liquidity sources are employed to hedge against different risks. Non-operational cash guards against future cash flow shocks in bad times, while credit lines give firms the option to exploit future business opportunities available in good times. Lines of credit are the dominant source of liquidity for companies around the world, comprising about 15% of assets, while less than half of the cash held by companies is held for non-operational purposes, comprising about 2% of assets. Across countries, firms make greater use of lines of credit when external credit markets are poorly developed.
In this paper, we introduce a modified collaborative filtering (MCF) algorithm, which has remarkably higher accuracy than the standard collaborative filtering. In the MCF, instead of the cosine similarity index, the user–user correlations are obtained by a diffusion process. Furthermore, by considering the second-order correlations, we design an effective algorithm that depresses the influence of mainstream preferences. Simulation results show that the algorithmic accuracy, measured by the average ranking score, is further improved by 20.45% and 33.25% in the optimal cases of MovieLens and Netflix data. More importantly, the optimal value depends approximately monotonously on the sparsity of the training set. Given a real system, we could estimate the optimal parameter according to the data sparsity, which makes this algorithm easy to be applied. In addition, two significant criteria of algorithmic performance, diversity and popularity, are also taken into account. Numerical results show that as the sparsity increases, the algorithm considering the second-order correlation can outperform the MCF simultaneously in all three criteria.
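The diffusion-based user-user correlation at the heart of this approach can be sketched in a few lines of numpy: a unit of resource spreads from a user to her collected items and back to other users, normalized by item and user degrees. The toy rating matrix below is hypothetical, and this is a minimal sketch of first-order mass diffusion only, not the authors' full MCF algorithm with second-order corrections.

```python
import numpy as np

# Toy binary user-item matrix (rows: users, cols: items); hypothetical data.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)

k_user = A.sum(axis=1)   # user degrees
k_item = A.sum(axis=0)   # item degrees

# Diffusion-based correlation: resource flows user -> items -> users,
# divided by item degree on the way out and user degree on the way back.
S = (A / k_item) @ A.T / k_user   # S[i, j]: correlation of user i with user j

# Recommendation scores: aggregate similar users' collections, mask seen items.
F = S @ A
F[A > 0] = -np.inf

top_for_u0 = int(np.argmax(F[0]))   # best unseen item for user 0
print(top_for_u0)
```

Here user 0 shares items with user 1 but not user 2, so the diffusion process recommends user 1's extra item; the paper's second-order correction would additionally penalize scores reachable through many short paths, depressing mainstream preferences.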
This paper considers the optimal taxation of two types of financial intermediation services (savings intermediation, and payment services) in a dynamic general equilibrium setting, when the government can also use consumption and income taxes. When payment services are used in strict proportion to final consumption, and the cost of intermediation services is the same across firms, the optimal taxes on financial intermediation are generally indeterminate. But, when firms differ in the cost of intermediation services, the tax on savings intermediation should be zero.
Also, when household time and payment services are substitutes in transactions, the optimal tax rate on payment services is determined by the returns to scale in the conditional demand for payment services, and is generally different to the optimal rate on consumption goods. In particular, with constant returns to scale, payment services should be untaxed. These results can be understood as applications of the Diamond-Mirrlees production efficiency theorem. The extension to n consumption goods in each period is also studied.
This paper shows that when agents on both sides of the market are heterogeneous, varying in their costs of investment, ex ante investments by firms and workers (or buyers and sellers more generally) may be too high when followed by stochastic matching and bargaining over quasi-rents. The overinvestment is caused by the fact that low-cost agents, by investing more, can increase the value of their outside option and thus shift rent away from high-cost investors. Numerical simulations show that overinvestment can occur given parameter values calibrated to OECD labour markets.
This paper explores the causes and consequences of the remarkable rise of the value-added tax (VAT), asking what has shaped its adoption and, in particular, whether it has proved an especially effective form of taxation. It is first shown that a tax innovation, such as the introduction of a VAT, reduces the marginal cost of public funds if and only if it also leads an optimizing government to increase the tax ratio. This leads to the estimation, on a panel of 143 countries for 25 years, of a system describing both the probability of VAT adoption and the revenue impact of the VAT. The results point to a rich set of determinants of VAT adoption, and to a significant but complex impact on the revenue ratio. The estimates suggest, very tentatively, that most countries which have adopted a VAT have thereby gained a more effective tax instrument, though this is less apparent in sub-Saharan Africa.
We discuss emerging proposals for border tax adjustments (BTAs) to accompany commitments to reduce carbon emissions in the EU, the US and other OECD economies. The rationale offered for such border adjustment is that various entities, such as the EU, if making commitments to reduce emissions which go beyond those undertaken in other regions of the world, impose added costs on domestic producers which create a competitive disadvantage for them. Some form of remedy is viewed as reasonable to maintain the competitiveness of domestic industries when responding to global environmental problems. In this paper, we argue that despite its current carbon manifestation, the issue of border tax adjustments, both their rationale and their effects on trade, is not new and, despite the present debate (which seems to overlook older literature), has arisen before. Earlier debate on border tax adjustments occurred at the time of the adoption of the Value Added Tax (VAT) in the EU as a tax harmonization target in the early 1960s. But academic literature of the time showed that a change between origin and destination basis in the VAT would be neutral, and hence the use of a destination-based tax in the EU to accompany the VAT offered no trade advantage to Europe. Here we argue that essentially the same arguments also apply for carbon-motivated BTAs, and in the current debate there seems to be a confusion between price level effects and relative price effects stemming from a BTA, which needs correcting. We also argue that the impact of border tax adjustments should be viewed as independent of the motivation of the adjustments.
We use a unique, nationally representative cross-national dataset to document the reduction in individuals' usage of routine non-emergency medical care in the midst of the economic crisis. A substantially larger fraction of Americans have reduced medical care than have individuals in Great Britain, Canada, France, and Germany, all countries with universal health care systems. At the national level, reductions in medical care are related to the degree to which individuals must pay for it, and within countries are strongly associated with exogenous shocks to wealth and employment.
When Karl Weick's seminal article, ‘Enacted Sensemaking in Crisis Situations’, was published in 1988, it caused the field to think very differently about how crises unfold in organizations, and how emergent crises might be more quickly curtailed. More than 20 years later, we offer insights inspired by the central ideas in that article. Beginning with an exploration of key sensemaking studies in the crisis and change literatures, we reflect on lessons learned about sensemaking in turbulent conditions since Weick (1988), and argue for two core themes that underlie sensemaking in such contexts: shared meanings and emotion. We examine when and how shared meanings and emotion are more and less likely to enable more helpful, or adaptive, sensemaking, and conclude with some suggestions for future research in the sensemaking field.
In this paper, we empirically examine how professional service firms are adapting their promotion and career models to new market and institutional pressures, without losing the benefits of the traditional up-or-out tournament. Based on an in-depth qualitative study of 10 large UK-based law firms we find that most of these firms do not have a formal up-or-out policy but that the up-or-out rule operates in practice. We also find that most firms have introduced alternative roles and a novel career policy that offers a holistic learning and development deal to associates without any expectation that unsuccessful candidates for promotion to partner should quit the firm. While this policy and the new roles formally contradict the principle of up-or-out by creating permanent non-partner positions, in practice they coexist. We conclude that the motivational power of the up-or-out tournament remains intact, notwithstanding the changes to the internal labour market structure of these professional service firms.
Environmental velocity has emerged as an important concept but remains theoretically underdeveloped, particularly with respect to its multidimensionality. In response, we develop a framework that examines the variations in velocity across multiple dimensions of the environment (homology) and the causal linkages between those velocities (coupling). We then propose four velocity regimes based on different patterns of homology and coupling and argue that the conditions of each regime have important implications for organizations.
We examine a multidisciplinary network established to translate genetics science into practice in the British NHS. Drawing on theory about epistemic communities and objects, we describe three stages in their lifecycle (vision/formation, transformation and reincarnation) and epistemic clashes over knowledge objects. Medical academics captured jurisdiction over the network at formation, through their superior knowledge of the nascent genetics discipline, producing epistemic objects reflecting their interests. A governmental community challenged medical academics for jurisdiction but, unable to transform objects by changing their space of representation in performance reporting, ceased funding the network, which then closed. Afterwards, however, a NHS community successfully ‘reincarnated’ a discarded epistemic object into a technical object in NHS practice. We make a theoretical contribution by developing a processual framework for understanding biomedical innovation, focusing on transforming objects situated between different wider knowledge/power structures. This explains how objects were transformed at micro-level through the interaction and relative power of local communities, influenced by macro-level rules about knowledge formation in wider epistemic, organizational and governmental communities.
The purpose of this paper is to explore general practitioners' (GPs') and psychiatrists' views and experiences of transparent forms of medical regulation in practice, as well as those of medical regulators and those representing patients and professionals. The research included interviews with GPs, psychiatrists and others involved in medical regulation, representing patients and professionals. A qualitative narrative analysis of the interviews was then conducted. Narratives suggest rising levels of complaints, legalisation and blame within the National Health Service (NHS). Three key themes emerge. First, doctors feel “guilty until proven innocent” within increasingly legalised regulatory systems and are consequently practising more defensively. Second, regulation is described as providing “spectacular transparency”, driven by political responses to high profile scandals rather than its effects in practice, which can be seen as a social defence. Finally, it is suggested that a “blame business” is driving this form of transparency, in which self-interested regulators, the media, lawyers, and even some patient organisations are fuelling transparency in a wider culture of blame. A relatively small number of people were interviewed, so further research testing the findings would be useful. Transparency has some perverse effects on doctors' practice. Rising levels of blame have perverse consequences for patient care, as doctors are practising more defensively as a result, as well as significant financial implications for NHS funding. Transparent forms of regulation are assumed to be beneficial and yet little research has examined their effects in practice. In this paper we highlight a number of perverse effects of transparency in practice.
In 2004, a former FDA medical officer named David Ross watched news coverage of one of the highest-profile pharmaceutical controversies in recent years: Merck’s withdrawal of Vioxx, its bestselling painkiller, from the global marketplace.
At the time, David Ross railed to his wife about the actions of David Graham, an associate director of drug safety at the FDA who drew international attention for testifying before the US Senate about the FDA’s handling of Vioxx. Graham said his supervisors ignored warnings that Vioxx could lead to cardiac arrest, and asked him to change his conclusions on an internal report about Vioxx’s risks.
Today, David Graham is still at the FDA. David Ross is not. But something both have in common is a concern that the FDA has not amended policies much in the wake of the Vioxx controversy, and may be, in some ways, seeking to police internal criticism more strictly.
Drawing on an analysis of Irving Kirsch and colleagues’ controversial 2008 article in PLoS [Public Library of Science] Medicine on the efficacy of SSRI antidepressant drugs such as Prozac, I examine flaws within the methodologies of randomized controlled trials (RCTs) that have made it difficult for regulators, clinicians and patients to determine the therapeutic value of this class of drug. I then argue, drawing analogies to work by Pierre Bourdieu and Michael Power, that it is the very limitations of RCTs — their inadequacies in producing reliable evidence of clinical effects — that help to strengthen assumptions of their superiority as methodological tools. Finally, I suggest that the case of RCTs helps to explore the question of why failure is often useful in consolidating the authority of those who have presided over that failure, and why systems widely recognized to be ineffective tend to assume greater authority at the very moment when people speak of their malfunction.
This workshop, organised by InSIS, aimed to foster an exchange between different approaches in science and technology studies and political and economic sociology about the study of heterogeneous arrangements – assemblages in which economic relations are always entangled with political, technical, ethical, social and material ones. We ask how this research can engage with the current context, in which the ‘free market’ is becoming a less secure and more openly contested category.
Large transnational professional service firms (PSFs) are highly influential in today's global economy because they underpin the integrity of financial markets, enable complex international transactions, and deliver ideas and advice to the world's largest corporations and governments. They sell expertise – that most intangible of outputs – and they seek to provide customized solutions to demanding clients on a global basis. Many other businesses will face similar challenges in the 21st century, as the world becomes more globally connected and customers become more demanding, seeking customized products or services that fit their particular needs. As such, professional service firms – of which accounting, law, and consulting are prime examples – are critical corporate players in the 21st century.
PSFs also face significant managerial challenges. Over the past few decades, they have grown in complexity, both geographically and in size, to the point where traditional organizational arrangements have proven inadequate. How have PSFs responded, and what lessons can we draw? Our ongoing studies of PSFs show the emergence of a “multiplex” organizational form. Organizations that are successfully implementing the multiplex design are responding to their complex environments by developing highly differentiated structures in order to capture the benefits of deep specialization along several axes. The trick, however, is in pulling these differentiated parts together. The risk that high differentiation will overpower attempts at coordination was responsible for the earlier failure in various industries of the two-dimensional matrix organizational form. But transnational PSFs have gone further by using three axes of differentiation – and appear to have done so successfully. They have discovered how to gain the powerful benefits of multidimensional specialization without losing overall coordinated effort. First, transnational PSFs have developed multiple axes of deep expertise with respect to their professional services, clients, and markets. Second, professionals within these firms are members of multiple practice groups and teams that criss-cross the multiple axes of expertise, thereby creating a landscape of nascent communities in which personal networks can flourish and expert knowledge can be identified. Third, the deep specialization and experience embodied in a firm's resources are pulled together and harnessed via a client management system, an integrative mechanism which classifies and sorts clients according to various strategic and operational criteria, assigns responsibility for each to appropriate senior professionals (partners), and assists in creating semi-permanent or bespoke client teams to address the needs of a client or specific engagement. 
Fourth, and perhaps most important, cooperation and coordination are achieved through a culture of reciprocity that is developed and sustained by a number of strategic, organizational, and normative processes within the firm.
In this article, we elaborate on this promising organizational design for the 21st century. We use accounting firms as our example, because they are exemplars of the knowledge-intensive, multiplex organizational form, but our research in other professional service organizations confirms that they, too, are using the same practices to develop the same design template. We begin by outlining the challenges for which this organizational form is the emerging solution.
Outcomes in financial markets depend upon the information available to market participants, and upon the contractual commitments that they can make. Many important commitments in financial markets are made outside the formal law, using institutional mechanisms that provide a plausible basis for commitment. Advances in information technology and changes to the political environment alter the institutions that are needed to support economic life, and hence should result in regulatory change. I illustrate this point with examples from investment and commercial banking, and I relate the design of regulatory institutions to the political and technological environment within which they operate.
This paper examines common regulation as cause of interbank contagion. Studies based on the correlation of bank assets and the extent of interbank lending may underestimate the likelihood of contagion because they do not incorporate the fact that banks have a common regulator. In our model, the failure of one bank can undermine the public’s confidence in the competence of the banking regulator, and hence in other banks chartered by the same regulator. Thus depositors may withdraw funds from their, unconnected, banks. The optimal regulatory response to this ‘panic’ behaviour can be to privately exhibit forbearance to the initially failing bank in the hope that it - and hence other vulnerable banks - survives. By contrast, public bailouts are ineffective in preventing panics and must be bolstered by other measures such as increased deposit insurance coverage. Regulatory transparency improves confidence ex ante but impedes regulators’ ability to stem panics ex post.
The paper considers the consequences of competition between two widely used exchange mechanisms, a "decentralized bargaining" market, and a "centralized" market. In every period, members of a large heterogeneous group of privately-informed traders who each wish to buy or sell one unit of some homogeneous good may opt for trading through one exchange mechanism. Traders may also postpone their trade to a future period. It is shown that trade outside the centralized market completely unravels. In every strong Nash equilibrium, all trade takes place in the centralized market. No trade ever occurs through direct negotiations.
The article discusses technology for supply chain management, focusing on tools for making information on products transparent to both managers and consumers. Among the technologies examined are radio frequency identification tags embedded in products, online databases available for consumers and Webcasting of supplier operations. The complexity of managing this information is acknowledged. The marketing advantages of presenting consumers with information on supply chain practices and products are considered. The use of such information in supply chain management is discussed.
Problem: Emergency surgical patients are at high risk for harm because of errors in care. Quality improvement methods that involve process redesign, such as “Lean,” appear to improve service reliability and efficiency in healthcare.
Design: Interrupted time series.
Setting: The emergency general surgery ward of a university hospital in the United Kingdom.
Key measures for improvement: Seven safety relevant care processes.
Strategy for change: A Lean intervention targeting five of the seven care processes relevant to patient safety.
Effects of change: 969 patients were admitted during the four month study period before the introduction of the Lean intervention (May to August 2007), and 1114 were admitted during the four month period after completion of the intervention (May to August 2008). Compliance with the five process measures targeted for Lean intervention (but not the two that were not) improved significantly (relative improvement 28% to 149%; P<0.007). Excellent compliance continued at least 10 months after active intervention ceased. The proportion of patients requiring transfer to other wards fell from 27% to 20% (P<0.000025). Rates of adverse events and potential adverse events were unchanged, except for a significant reduction in new safety events after transfer to other wards (P<0.028). Most adverse events and potential adverse events were owing to delays in investigation and treatment caused by factors outside the ward being evaluated.
Lessons learnt: Lean can substantially and simultaneously improve compliance with a bundle of safety related processes. Given the interconnected nature of hospital care, this strategy might not translate into improvements in safety outcomes unless a system-wide approach is adopted to remove barriers to change.
Over the past 10 years the sales of Fair Trade goods – particularly those carrying the Fair Trade Labelling Organizations International (FLO) certification mark – have grown exponentially. Academic interest in Fair Trade has also grown significantly over the past decade with researchers analysing the model from a wide range of theoretical perspectives. Whilst Fair Trade is generally acknowledged as a new supply chain model, it has tended to be studied at the micro/organisational level rather than at the macro/systems level. As a consequence, its wider impact as institutional innovation at the field level appears to have been under-theorised so far. In order to address this research gap, this article uses a neo-institutionalist perspective to analyse Fair Trade not simply as a new exchange model working within existing organisational and economic structures, but rather as an agent of institutional entrepreneurship at, and beyond, the field level. From this latter perspective, Fair Trade brings a new set of transformational meanings to extant exchange and consumption models and reforms fields of economic exchange by disrupting and then re-assembling key institutional elements around modern consumption to roll back commodity fetishism and reconnect consumers and producers. The type of institutional change driven by Fair Trade can be seen as a form of social entrepreneurship.
Across the world, a new landscape of social investment has been developing rapidly over the last 10–15 years, yet there has not been an academic study of the phenomenon to date. This paper aims to address this important gap in social entrepreneurship research with new empirical and theoretical work. Theoretically, the paper takes an interpretive approach drawing on institutional theory and other work on the sociology of markets to conceptualize social investment as a socially constructed space within which different investment logics and investor rationalities are currently in play. Using a Weberian analytic lens this paper identifies two ideal type investor rationalities (zweckrational; wertrational) that drive different institutional forms of social investment but also suggests that a third – systemic – rationality can be discerned that combines aspects of both in practice. This analysis suggests a three-part typology of social investment organized according to investor rationality that, in turn, generates a Social Investment Matrix consisting of nine distinct models. Empirically, this paper presents – for the first time – an attempt to quantify the flows of capital within the inchoate social investment landscape. The paper concludes by setting out three possible future scenarios for social investment each representing the ultimate dominance of a singular investor rationality.
In 2005, the British parliament passed legislation to make available the first new legal form of incorporation in over a century: the Community Interest Company (CIC). This initiative represented an important element within a larger set of public policy measures that aimed to create a more enabling environment for the accelerated growth of social entrepreneurship and, specifically, social enterprises. In an exploratory study, this paper presents an analysis of the regulatory space within which the reporting and disclosure practices for CICs were negotiated. Three elements within the regulatory space are identified as having explanatory value: regulatory boundaries that set and limit the terms of negotiation around regulatory practice; the key actors that engage in a process of negotiation around the establishment of actual practice; the range of debate and conflicting ideas that inform regulatory negotiation and legitimating consensus. The analysis suggests that a normative logic of light touch regulation was of particular importance within the wider UK policy context from within which CICs emerged and that the CIC Regulator acts as a mediator of disclosure information across multiple user constituencies. Empirically, this paper draws upon a sample of 80 published CIC annual reports to consider two aspects of CIC reporting: the quantity of information provided and the type of data presented. These data demonstrate the limitations and challenges of current CIC regulatory disclosure practices for key users of reporting information, particularly in terms of perceptions of organizational legitimacy. Conclusions are drawn concerning these limitations, particularly in terms of their implications for public policy. In terms of new research, this paper makes two important contributions. First, it develops theory in terms of (social) reporting and public policy with respect to the regulatory mechanisms that relate the two. 
This has yet to be explored in social entrepreneurship research. Second, this paper includes a preliminary examination of the reporting practices of CICs in their policy context, including an analysis of a sample of the publicly available CIC annual reports that have been filed to date. This data set has yet to be the subject of any other academic research.
Following Kuhn, this article conceptualizes social entrepreneurship as a field of action in a pre-paradigmatic state that currently lacks an established epistemology. Using approaches from neo-institutional theory, this research focuses on the microstructures of legitimation that characterize the development of social entrepreneurship in terms of its key actors, discourses, and emerging narrative logics. This analysis suggests that the dominant discourses of social entrepreneurship represent legitimating material for resource-rich actors in a process of reflexive isomorphism. Returning to Kuhn, the article concludes by delineating a critical role for scholarly research on social entrepreneurship in terms of resolving conflicting discourses within its future paradigmatic development.
Financial institutions are increasingly linked internationally. As a result, financial crisis and government intervention have stronger effects beyond borders. We provide a model of international contagion allowing for bank bailouts. While a social planner trades off tax distortions, liquidation losses and intra- and intercountry income inequality, in the non-cooperative game between governments there are inefficiencies due to externalities, no burden sharing and free-riding. We show that, in absence of cooperation, stronger interbank linkages make government interests diverge, whereas cross-border asset holdings tend to align them. We analyze different forms of cooperation and their effects on global and national welfare.
This paper develops a new theory of multinational capital structure based on legal-system arbitrage: The optimal capital structure for the multinational minimizes the value of the ex post opportunism options created by the diverse legal systems under which the multinational operates. This theory explains the complex mix between parent and subsidiary financing observed in most multinationals, even in the absence of both tax differentials and private information. Optimal capital structures minimize the default premia associated with the multinational's overall financing package by equating the marginal enforceability of debt contracts in the host and headquarters countries. Consistent with extant empirical research, the analysis shows that multinational utilization of local financing will be positively related to the creditor-friendliness of the local legal system. Further, the model provides an explanation for the fact that multinationals do not simply obtain all their financing in the location featuring the most creditor-friendly legal regime. In addition, the model produces many new empirical predictions regarding issues such as the venue selected by the multinational for restructuring, the optimal allocation of capital within the multinational, and the impact of currency risk on credit spreads and financing policy.
This paper assesses the choice of different regulatory policy instruments for crisis management and prevention. We show that Capital and Liquidity Requirements (as Basel III proposes) are not sufficient to ensure Financial Stability. Three different tools are required to address three different inefficiencies. Capital requirements mitigate 'structural' default frictions; liquidity requirements contain the contagious effects of small shocks by stemming the collapse of collateral and asset prices; and margin requirements protect against excessive leverage and securitization.
We study price formation in securities markets, using the sequential trade framework of Glosten and Milgrom (1985). This paper makes one basic methodological advance over previous research on sequential securities trading: we allow traders to choose from n trade sizes in a multi-period market, where n can be arbitrarily large. We examine how trade size multiplicity affects the intertemporal dynamics of trading strategies, bid-ask spreads, and information revelation. We show that price impact, as a function of trade size, is increasing and exhibits (discrete) concavity.
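The single-size benchmark that this multi-size setting extends can be illustrated with a minimal one-period Glosten and Milgrom (1985) quote calculation for a binary-value asset. This is a sketch of the standard textbook model only, not the paper's n-trade-size extension; the parameter values are illustrative.

```python
def glosten_milgrom_quotes(v_low, v_high, p_high, mu):
    """One-period Glosten-Milgrom quotes for a binary-value asset:
    ask = E[V | buy], bid = E[V | sell], with informed share mu and
    uninformed traders buying/selling with equal probability."""
    p_low = 1 - p_high
    # likelihood of a buy order given each value realisation
    buy_h = mu + (1 - mu) / 2   # informed buy on high value
    buy_l = (1 - mu) / 2        # only uninformed buy on low value
    p_buy = p_high * buy_h + p_low * buy_l
    ask = (p_high * buy_h * v_high + p_low * buy_l * v_low) / p_buy
    # sell orders are symmetric
    sell_h = (1 - mu) / 2
    sell_l = mu + (1 - mu) / 2
    p_sell = p_high * sell_h + p_low * sell_l
    bid = (p_high * sell_h * v_high + p_low * sell_l * v_low) / p_sell
    return bid, ask

bid, ask = glosten_milgrom_quotes(v_low=0.0, v_high=1.0, p_high=0.5, mu=0.2)
```

With symmetric priors the spread equals the informed share mu, which is the adverse-selection logic the paper generalises to many trade sizes.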
The introduction of Basel II has raised concerns about the potential impact of risk-sensitive capital requirements on the business cycle. Several approaches have been proposed to assess the procyclicality issue. In this paper, we adopt a general equilibrium model and conduct a comprehensive analysis of different proposals. We set out a model that allows us to evaluate different rating systems in relation to the procyclicality issue. Our model extends previous models by analysing the effects of different rating systems on banks’ portfolios (as in Catarineu et al. in Econ Theory 26:537–557, 2005) and the contagion effects relevant to financial stability (as in Goodhart et al. in Ann Finance 1:197–224, 2005). The paper presents comparative statics results comparing a cycle-dependent and a neutral rating system from the point of view of banks’ profit maximization. Our results suggest that banks’ preferences for point-in-time or through-the-cycle rating systems depend on the banks’ characteristics and on the business cycle conditions in terms of expectations and realizations.
As an asset class, private equity has generally enjoyed very favorable coverage in the financial media. Academic studies indicate, however, that private equity’s performance is not as robust as the media suggest. In addition, investing in private equity carries unique risks.
Private equity plays an important role in the financing of the corporate sector. An important issue is the attractiveness of this asset class to investors. That is, how well have private equity funds performed for their investors?
The importance of private equity is undeniable, and the returns that are available to investors will in part determine its future success. Against this backdrop the Amsterdam Center for Corporate Finance (ACCF) has decided to devote this issue of its discussion series "Topics in Corporate Finance" to this important topic.
Ludovic Phalippou, an associate professor of the University of Amsterdam, is one of the key researchers in this area. His research, reflected in this booklet, raises some points of concern. In particular, he concludes that the average private equity fund performs below reasonable (i.e. risk-matching) benchmarks. Fees paid by investors are high even when performance is below these benchmarks. Moreover, the contracts between private equity firms and their investors do not align interests, i.e. induce potential conflicts of interest. Professor Phalippou proposes some carefully crafted general guidelines for regulation.
This paper finds that venture capital funds that are expected to be backed by more skilled investors show no performance persistence but a significant flow-performance relationship. In contrast, funds that are expected to be backed by less skilled investors show performance predictability and have a non-significant flow-performance relationship. These results suggest that only skilled investors use all available information to adjust their capital allocation and, as a result, eliminate performance predictability as argued theoretically by Berk and Green (2004). Results also show that Kaplan and Schoar (2005) overstate the persistence in fund performance by not using an ex ante measure of the performance of earlier funds. Whether or not an ex ante measure is used, however, the persistence is largely due to unsophisticated investors. When investors are sophisticated, the performance of earlier funds, sequence and fund size do not help predict the performance of the focal fund.
Today, the terms supply chain and supply chain management are in common use. In the global economy, supply chain is part of everyday business across industries, and retailing is a leading example. On this 25th anniversary of the Oxford Institute of Retail Management we look at how the concept of supply chain management was introduced in retailing and how it has evolved over time. This exercise is based on a selective review of OXIRM publications between 1990 and 2010, mainly articles from the Retail Digest (formerly the European Retail Digest).
Purpose – This paper aims to present electronic procurement benefits identified in four case companies from the information technology (IT)/hi-tech sector.
Design/methodology/approach – Multi-case study design was applied. The benefits reported in the companies were analysed and classified according to taxonomies from the information systems discipline. Finally, a new benefits classification was proposed. The framework was developed based on information systems literature.
Findings – The research confirmed difficulties with benefits evaluation, as, apart from operational benefits, non-financial, intangible benefits at the strategic level were also identified. Traditional evaluation methods are unable to capture all benefit categories, especially at the strategic level. A new taxonomy was created, which allows evaluation of the complex e-procurement impact. In the proposed taxonomy, e-procurement benefits are classified according to their level (operational, tactical, strategic) and area of impact, applying scorecard dimensions (customer, process, financial, learning and growth). In addition, the character of each benefit is captured (tangible, intangible, financial and non-financial).
Research limitations/implications – Research is based on four case studies only. Findings are specific to case companies and the environment in which they operate. The framework should be tested further in different contexts.
Practical implications – The new taxonomy allows evaluation of the complex e-procurement impact, demonstrating that benefits achieved do not concern merely the financial impact. The framework can be applied to preparing new systems implementation as well as to evaluating existing systems.
Originality/value – The paper applies information systems frameworks to the electronic procurement field, which allows one to look at e-procurement systems in light of their complex impact. The framework can also be used to evaluate different systems, not simply e-procurement.
If the Roll critique is important, changes in the variance of the stock market may be only weakly related to changes in aggregate risk and subsequent stock market excess returns. However, since individual stock returns share a common sensitivity to true market return shocks, higher aggregate risk can be revealed by higher correlation between stocks. In addition, a change in stock market variance that leaves aggregate risk unchanged can have a zero or even negative effect on the stock market risk premium. We show that the average correlation between daily stock returns predicts subsequent quarterly stock market excess returns. We also show that changes in stock market risk holding average correlation constant can be interpreted as changes in the average variance of individual stocks. Such changes have a negative relation with future stock market excess returns.
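The predictor described above is the average pairwise correlation of daily stock returns. A minimal sketch of how it would be computed from a T x N return matrix (data here is simulated; variable names are illustrative):

```python
import numpy as np

def average_pairwise_correlation(returns):
    """Average of the off-diagonal entries of the correlation matrix
    of a T x N matrix of daily returns (columns = stocks) -- the
    aggregate-risk proxy described in the abstract."""
    corr = np.corrcoef(returns, rowvar=False)  # N x N correlation matrix
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]    # drop the diagonal of ones
    return off_diag.mean()

# simulate 60 trading days of returns for 5 stocks with a common factor,
# so the true pairwise correlation is positive
rng = np.random.default_rng(0)
factor = rng.normal(0, 0.01, size=(60, 1))
idiosyncratic = rng.normal(0, 0.01, size=(60, 5))
returns = factor + idiosyncratic
rho = average_pairwise_correlation(returns)
```

In the paper's framing, a rise in this average correlation signals higher aggregate risk even when total market variance is unchanged.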
This chapter explores the origins of the theme of competitive advantage in 19th and early 20th century economics. This theme, which forms the core of modern Strategic Management, was a battleground for debates about the value of abstract theory versus observations about real-life events. Intellectual genealogies, citations, and other sources show the central roles played by the University of Vienna and Harvard University. These two institutions strongly influenced the theory of monopolistic competition as well as all three modern views of competitive advantage – the industrial as expressed by Porter, the resource-based as expressed by Penrose, and the evolutionary as expressed by Schumpeter.
We present an ordinal method for studying persistence in firm profitability. The method is based on the degree of stability in a ranked performance distribution over time. The method gives a numerical index of rank friction (Rf) that can be applied to any ranked data over any period of time. Rf is nonparametric and can be used to test theoretical assumptions in strategic management. We illustrate the method in an empirical study of 40 years of profit data in 12 industries.
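The abstract does not reproduce the Rf formula, but the idea of an ordinal index of rank stability can be sketched as follows. This is a hypothetical stand-in (mean absolute rank change, normalised to [0, 1]), not the authors' actual definition:

```python
def rank_friction(ranks_t0, ranks_t1):
    """Illustrative ordinal-stability index: mean absolute change in
    rank between two periods, normalised so that 0 means a perfectly
    stable ranking and 1 means maximal churn (a full reversal).
    NOTE: a stand-in for the paper's Rf, whose exact definition is
    not given in the abstract."""
    n = len(ranks_t0)
    total = sum(abs(a - b) for a, b in zip(ranks_t0, ranks_t1))
    max_total = n * n // 2  # maximum attainable sum of |rank changes|
    return total / max_total

# four firms ranked by profitability in two consecutive periods
stable = rank_friction([1, 2, 3, 4], [1, 2, 3, 4])    # no movement
churn = rank_friction([1, 2, 3, 4], [4, 3, 2, 1])     # full reversal
```

Being built purely on ranks, such an index is nonparametric, which matches the property the abstract emphasises.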
Synthesizing knowledge from psychology and marketing research, an understanding of nonverbal communication can help address when and how customers express their underlying feelings in retail interactions that are not evident in direct verbal expressions. Examining nonverbal behavior as an indirect measure of consumer response can enable retailers to better understand the needs of their customers. Nonverbal communication theory is used to develop a conceptual framework that builds on prior research on the situation, expressivity, social status, display rules, and their effects on customer expression. Lay wisdom suggests that customer expression should be revealing (e.g., “the eyes are the windows to the soul”). However, research reveals a myriad of situational factors that may lead customers to mask their true feelings. This paper offers nine theoretical propositions and summarizes research evidence related to these propositions from various substantive domains for marketing research.
We propose a protocol optimization technique that is applicable to both weighted and unweighted graphs. Our aim is to explore by how much a small variation around the shortest-path or optimal-path protocols can enhance protocol performance. Such an optimization strategy can be necessary because even though some protocols can achieve very high traffic tolerance levels, this is commonly done by enlarging the path lengths, which may jeopardize scalability. We use ideas borrowed from extremal optimization to guide our algorithm, which proves to be an effective technique. Our method exploits the degeneracy of the paths or their close-weight alternatives, which significantly improves the scalability of the protocols in comparison to shortest-path or optimal-path protocols, keeping at the same time almost intact the length or weight of the paths. This characteristic ensures that the optimized routing protocols are composed of paths that are quick to traverse, avoiding negative effects in data communication due to path-length increases that can become especially relevant when information losses are present.
This paper compares the stability of the U.S. Dual Banking system’s two bank groups, national and state banks, in light of the global financial crisis 2007/2008. The goal of the paper is to answer three distinct questions: first, is there a difference in the (balance-sheet) fragility between the two groups and, second, to what extent has the balance sheet fragility of both groups changed after the escalation of the financial crisis beginning in August 2007? Building on that, the third question asks whether the respective regulatory agencies of both bank groups are responsible for these changes in balance sheet fragility in light of the financial crisis. To answer these questions the paper uses U.S. Call Report data containing full quarterly balance sheets and P&Ls of all U.S. commercial banks over the period 2005-2008. Anecdotal evidence as well as univariate and multivariate difference-in-differences methodology focusing on the immediate pre-crisis period Q1/2005 to Q3/2007 and the crisis period Q3/2007 to Q4/2008 are applied. Highly significant and robust results show that, ceteris paribus, national banks reduced their potential balance sheet fragility after the escalation of the crisis in August 2007 by reducing lending and liquidity creation more strongly than state banks. Anecdotal evidence supports the empirical findings. Although both FDIC and OCC did not anticipate the adverse effects of the crisis, the OCC publicly showed an earlier reaction to liquidity-related problems than the FDIC. The paper is the first of its kind to analyze bank fragility around the escalation of the financial crisis and the role of the regulatory agencies. The paper holds especially interesting policy implications in the light of the current discussion about the future regulation of the banking markets.
The author looks at diverse concepts and roles of trust in the challenge of decarbonising energy systems, drawing on 25 years of personal experience in the fields of energy and environmental policy research. The paper focuses on three issues: public trust in science, institutional trust in making technology choices, and the idea that high-trust societies are more sustainable than those exhibiting low trust. While trust is a key concept in understanding the public acceptability of technology choices, it is only one of a suite of interrelated concepts that must be addressed, which also includes liability, consent, and fairness. Furthermore, rational distrust among competing institutional world views may be critical in understanding the role of social capital in socioeconomic and technological development. Thus the concept of trust has become a portmanteau, carrying a diverse range of ideas and conditions for sustainable energy systems. The paper concludes with three emphases for decision makers. First, the issue is the energy system, not particular generating technologies. Second, the energy system must be recognized to be as much a social system as it is a technical one. Third, the system requires incorporation of the minimum level of diversity of engineering technologies and social actors to be sustainable.
Management consultants have long been recognized as carriers of management knowledge and disseminators of management fashions. While it is well understood how they promote the acceptance of their concepts, surprisingly little has been said about their strategies to promote the acceptability of their services. In this paper, we elaborate a typology of strategies by which management consultancies can create and sustain such “institutional capital” (Oliver, 1997) that helps them extract competitive resources from their institutional context. Drawing on examples from the German consulting industry, we show how localized competitive actions can enhance individual firm’s positions, but also the collective institutional capital of the consulting industry as a whole, legitimizing consulting services in broader sectors of society and facilitating access to requisite resources. Our accounts counter prevailing imagery of institutional entrepreneurship as individualistic, “heroic” action and demonstrate how distributed, embedded actors can collectively shape the institutional context from within to enhance their institutional capital.
Purpose – The paper has three objectives: first, to reflect on the contribution of this journal to the study of retail location assessment and decision-making; second, to use the results of a questionnaire survey of retailers to assess the employment of location assessment techniques a decade since a similar survey conducted by Hernández and Bennison (2000); third, in the light of these results, to conclude what likely challenges the location planning profession will face over the next decade.
Methodology – Employs an online questionnaire survey of retailers across a range of sizes and sub-sectors.
Findings – We find that specialist location planning teams within retailers are small, with forecasting processes firmly established for new or relocated stores – indicative of less activity focused on the management of the existing portfolio or the identification of outlets within the network for rationalisation. The vast majority of site assessment techniques increased in use over the decade, reflecting a greater reliance on data and analysis to inform decision-making alongside the traditional use of experience and intuition. Complementing highly technical evaluation techniques, the site visit is widely recognised as informing modelling and subsequent decision-making.
Research limitations – The survey sample is smaller and contains a greater proportion of larger businesses than that undertaken by Hernández and Bennison (2000).
Originality & value – Underlines the changes in location planning sophistication a decade on from a landmark survey. Suggests the implications of the observed changes and identifies likely developments in the profession.
This paper investigates the impact of corporate taxes on the input factor choice of multi-jurisdictional entities (MJEs) under a formula apportionment (FA) regime. Our testing ground is the German local business tax that applies FA regulations with income apportionment according to the relative payroll share. Using unique data on the population of German firms, we find that MJEs distort their employment and payroll costs in favor of low-tax locations. On average, a 1-percentage-point increase in the tax rate differential between an affiliate and foreign group members is found to lower the affiliate's payroll to capital ratio by 1.9%.
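The mechanics of payroll-based formula apportionment can be sketched in a few lines: each jurisdiction taxes the fraction of group income equal to its share of total group payroll. The municipalities, figures, and rates below are illustrative, not drawn from the paper's data.

```python
def apportion_tax_base(total_income, payroll_by_location, rates):
    """Formula apportionment by relative payroll share: jurisdiction j
    taxes total_income * (payroll_j / total payroll) at its own rate.
    All inputs are hypothetical; this sketches the mechanism only."""
    total_payroll = sum(payroll_by_location.values())
    liability = {}
    for loc, payroll in payroll_by_location.items():
        share = payroll / total_payroll         # apportionment factor
        liability[loc] = total_income * share * rates[loc]
    return liability

# a hypothetical two-municipality group: shifting payroll toward the
# low-rate location "A" shifts apportioned income there as well
liab = apportion_tax_base(
    total_income=1_000_000,
    payroll_by_location={"A": 600_000, "B": 400_000},
    rates={"A": 0.14, "B": 0.17},
)
```

The distortion the paper documents follows directly from this formula: because payroll is the apportionment factor, relocating payroll to a low-tax affiliate lowers the group's total liability.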
This paper presents a new approach to estimating the existence and magnitude of tax-motivated income shifting within multinational corporations. Existing studies of income shifting use changes in corporate tax rates as a source of identification. In contrast, this paper exploits exogenous earnings shocks at the parent firm and investigates how these shocks propagate across low-tax and high-tax multinational subsidiaries. This approach is implemented using a large panel of European multinational affiliates over the period 1995 -2005. The central result is that parents’ positive earnings shocks are associated with a significantly positive increase in pretax profits at low-tax affiliates, relative to the effect on the pretax profits of high-tax affiliates. The result is robust to controlling for various other differences between low-tax and high-tax affiliates and for country-pair-year fixed effects. Additional tests suggest that the estimated effect is attributable primarily to the strategic use of debt across affiliates. The magnitude of income shifting estimated using this approach is substantial, but somewhat smaller than that found in the previous literature.
Revenue management (RM) enhances the revenues of a company by means of demand-management decisions. An RM system must take into account the possibility that a booking may be canceled, or that a booked customer may fail to show up at the time of service (no-show). We review the Passenger Name Record data mining based cancellation rate forecasting models proposed in the literature, which mainly address the no-show case. Using a real-world dataset, we illustrate how the set of relevant variables to describe cancellation behavior is very different in different stages of the booking horizon, which not only confirms the dynamic aspect of this problem, but will also help revenue managers better understand the drivers of cancellation. Finally, we examine the performance of the state-of-the-art data mining methods when applied to Passenger Name Record based cancellation rate forecasting.
How do corporate legal departments and law firms make decisions about ‘making’ or ‘buying’ legal services? In what ways are their decisions governed by the usual criteria and factors identified in economic and managerial theories of the firm? And to what extent are lawyers’ make‐or‐buy decisions affected by professionalism and partnerships that govern the legal profession? This paper addresses these questions by generating five propositions arising out of various theories, concerning (1) the link between task modularity and organizational modularity, (2) knowledge interdependence complicating this link, (3) rent‐seeking, property‐rights, incentive‐alignment, and decision‐making adaptation motives for make‐or‐buy decisions, (4) the impact of managerial hierarchy on make‐or‐buy decisions, and (5) the industry‐level distribution of capabilities affecting value chain disintegration. The paper discusses some evidence in legal services in support of these propositions, and raises questions for further research.
Choosing between outsourcing and shared services has significant implications for long-term corporate strategy.
Purpose – The need for an efficient provision of product variety has been widely established as a means of competing in the marketplace, yet previous studies into the management of product variety have commonly analysed products in isolated developed markets. The purpose of this paper is to investigate how firms manage their product variety in emerging markets. This paper aims to investigate the rationale underlying the restriction of variety in such settings, and define general mechanisms by which firms can adapt their product variety when operating in both emerging and developed markets simultaneously.
Design/methodology/approach – The paper uses the case of a global vehicle manufacturer that offers common products across developed and emerging markets to illustrate the difference between them in terms of product variety, and examine the process that underlies its management. The paper utilises a combination of data collection techniques.
Findings – The paper shows empirically how product variety (in particular ex factory variety) is restricted in emerging markets, as one would expect, and it identifies the mechanisms by which product variety is managed across different markets. The paper further illustrates how emerging markets have developed secondary coping mechanisms to deliver additional variety through late configuration in the distribution system.
– By examining the differential management of product variety in emerging and developed markets, the findings yield several novel aspects by providing both empirical evidence and explanations for the restriction of product variety in emerging markets.
For over three centuries and throughout the globe, people have enthusiastically bought savings products that incorporate lottery elements. In lieu of paying traditional interest to all investors proportional to their balances, these Prize Linked Savings (PLS) accounts distribute periodic sizeable payments to some investors using a lottery-like drawing where an investor’s chances of winning are proportional to one’s account balances. This paper describes these products, provides examples of their use, argues for their potential popularity in the United States —especially to low and moderate income non-savers—and discusses the laws and regulations in the United States that largely prohibit their issuance.
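The proportional-odds lottery rule the abstract describes can be sketched in a few lines; the account names, balances, and seeds below are invented purely for illustration:

```python
import random

def draw_prize_winner(balances, rng):
    """Pick one prize winner: each account's chance of winning is
    proportional to its balance (the PLS drawing rule described above)."""
    accounts = list(balances)
    weights = [balances[a] for a in accounts]
    return rng.choices(accounts, weights=weights, k=1)[0]

# Hypothetical accounts: carol holds 60% of total balances,
# so she should win roughly 60% of independent draws.
balances = {"alice": 100.0, "bob": 300.0, "carol": 600.0}
wins = sum(draw_prize_winner(balances, random.Random(seed)) == "carol"
           for seed in range(1000))
```

Unlike traditional interest, the expected payout is still proportional to the balance, but it arrives as occasional sizeable prizes rather than a steady trickle.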
Over the past half century, consumers in Australia have increasingly been confronted with a plethora of health food products. This paper focuses on health food that encourages consumption through the promise of health benefits. In this context, media representation of such food serves as a lens to explore the spread of consumer culture in Australia. Taking a historical perspective, this paper argues that, in promoting such foods, food “experts” form an advisory nexus amid the increasing “gastro-anomy” that Fischler (1980) speaks of. Fifty years of advertising, editorial content and articles from the Australian Women’s Weekly are examined. Warde’s (1997) antinomies of tastes are used as a starting point to show how the anxiety and risks associated with food consumption
Over the past decade consumers in Australia and elsewhere have increasingly been confronted with a fast-growing number of health food products. This profusion of health foods is accompanied by a proliferation in popular culture of professional nutritional advice on what is 'good to eat'. The genre of lifestyle magazines is one popular medium via which healthy practices and health foods are frequently reported. In this paper we use a visual discourse analysis of food-related editorial and advertorial content sourced from the long-running and popular Australian Women's Weekly to investigate how lifestyle magazines have been one important locus for constituting health-conscious consumers. Taking up a Foucauldian governmentality perspective, we trace how this active, responsible conceptualization of the consumer, which we refer to as the 'healthy food consumer', has increased in prevalence in the pages of the Australian Women's Weekly over time. Based on our analysis we suggest that the editorial and advertorial content offers individuals models of conduct about which preventative activities to engage in, and plays an important role in shaping how we think about taking care of our health through eating.
Archaeology abounds in visual media, both media artifacts from the past, as well as means of documenting and studying those artifacts. Classic and long-established approaches to visual media include iconography and iconology (semantics and the identification of visual content), semiotics (the systems and structures of communication, signification and meaning), as well as graphics, cartography, planning and charting (communicative efficacy, the geometry of 2D to 3D translation, and information compression).
We shift emphasis in this paper away from communication, iconology, and visuality per se, the content and structure of imagery, toward the way visuality works in archaeology, from visual media as material forms (graphics, maps, photographs) to the work that visual media perform in archaeology. Along the way we present a criticism of the stress placed in much discussion of visual media on their representational qualities, that is, their fidelity to what they are taken to represent, to their mimetic qualities and their degree of correspondence to what is represented.
It is not that we consider such inquiry to be wrong, but rather that communication and meaning are often secondary functions of media. Ironically, what often matters most about visual media, we would claim, is not what they represent, but the way they fit into archaeological work on the remains of the past. In this development of McLuhan’s old adage that it is the medium that matters, we focus attention on practice and discourse, drawing particularly on the field of science and technology studies (STS). This emphasis upon the way images work is why we term our interest one in the political economy of visual media—recovering the work done by visual media in archaeology through networks of production, distribution and consumption. This leads us to identify some of the implications of new digital media, not for more spectacular summations of data about the past or photorealistic simulations, but as open fora for the co-production of pasts that matter now and for visions of future community.
This paper investigates the structure of firms’ outward FDI and their behaviour at home in both manufacturing and business services sectors. UK multinationals with overseas affiliates in low-wage economies invest simultaneously in a large number of high-wage countries. I find that more productive multinationals operate in a greater number of countries, consistent with their being able to bear the fixed costs of investing in numerous locations abroad. UK manufacturing plants owned by large-scale, low-wage economy outward investors display lower domestic employment growth, in particular in low-skill activities, consistent with low-wage economy labour substituting for low-skill labour in the UK.
The main contribution of this paper is the demonstration that, contrary to conventional thinking, a measurable increase in the operational complexity of the production scheduling function between two companies can occur following closer supply chain integration. The paper presents the practical application of previous work carried out and validated by the authors in terms of (a) methodology for measuring operational complexity, (b) predicted implications of Supplier–Customer integration and (c) derivation of an operational complexity measure applied to before and after Supplier–Customer integration. This application is illustrated via a longitudinal case study. The analysis is based on information theory, whereby operational complexity of a Supplier–Customer system is defined as the amount of information required to describe the state of this system. The results show that operational complexity can increase when companies decide to integrate more closely, which is a fact likely to be overlooked when making decisions to pursue closer Supply-Chain integration. In this study, operational complexity increases due to reduced buffering arising from reduction in the Supplier's inventory capacity. The Customer did not change their operational practices to improve their schedule adherence post-integration, and, consequently, suffered an increase in complexity due to complexity rebound. Both the Supplier's and Customer's decision-making processes after the case study reported in this paper were enhanced by being able to quantify the complex areas to prioritise and direct managerial efforts towards them, through the use of the operational complexity measure. Future work could extend this study (in the ‘low product customisation’ and ‘low product value impact’ quadrant) to investigate Supplier–Customer integration in other quadrants resulting from further combinations between ‘product customisation’ and ‘product value impact’ levels.
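The information-theoretic idea underlying the paper's measure, that operational complexity is the amount of information required to describe the state of the Supplier–Customer system, can be illustrated with a Shannon-entropy sketch; the state labels and counts below are invented, and this is a generic simplification rather than the authors' exact measure:

```python
import math
from collections import Counter

def operational_complexity(state_observations):
    """Shannon entropy (in bits) of the observed system states: the average
    amount of information needed to describe the state of the system.
    A generic sketch of the information-theoretic idea, not the paper's
    validated measure."""
    n = len(state_observations)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(state_observations).values())

# Invented schedule-adherence data: after integration the system visits
# more states, more evenly, so more information is needed to describe it.
pre_integration  = ["on_time"] * 90 + ["late"] * 10
post_integration = ["on_time"] * 60 + ["late"] * 25 + ["expedited"] * 15
```

On these invented data the post-integration entropy is higher, mirroring the paper's finding that reduced buffering can increase the complexity a manager must absorb.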
The pooling method of accounting for business combinations was banned in the USA in 2001 and by the International Accounting Standards Board (IASB) in 2004. Although the US ban was controversial, the IASB ban received much less opposition. We review comment letters to the IASB and discuss possible reasons for this disparity. Understanding why reactions to similar proposals can be so different is increasingly important as efforts to converge US generally accepted accounting principles and International Financial Reporting Standards (IFRS) gather pace. Our paper investigates the consultative stage for IFRS 3, exploring various aspects of the proposed standard, including elimination of pooling, non-amortisation of goodwill and consideration of fresh start. While the earlier US process possibly contributed to the lack of controversy around the IASB proposals, we suggest that other factors also played an important part. This study contributes to our understanding of factors that shape accounting reforms.
Social commerce is an emerging trend in which sellers are connected in online social networks and sellers are individuals instead of firms. This article examines the economic value implications of a social network between sellers in a large online social commerce marketplace. In this marketplace, each seller creates his or her own shop, and network ties between sellers are directed hyperlinks between their shops. Three questions are addressed: (1) Does allowing sellers to connect to each other create value (i.e., increase sales)? (2) What are the mechanisms through which this value is created? and (3) How is this value distributed across sellers in the network and how does the position of a seller in the network (e.g., its centrality) influence how much he or she benefits or suffers from the network? The authors find that (1) allowing sellers to connect generates considerable economic value, (2) the network's value lies primarily in making shops more accessible to customers browsing the marketplace (the network creates a “virtual shopping mall”), and (3) the sellers who benefit the most from the network are not necessarily those who are central to the network but rather those whose accessibility is most enhanced by the network.
This research describes the findings from an interpretive case study that explores the interplay between social computing (SC) and enterprise systems (ES). A fundamental shift is evident in how organisations become more effective through the adoption of SC capabilities. As process-centric ES continue to pose challenges, an SC-inspired, people-centric ES has become a medium for efficient interaction and collaboration across the divisions of an organisation. In this organisational reality, we explore the role of users' virtual co-presence in collaboration in ES.
Our findings indicate that interactions enabled by virtual co-presence, when focused and sustained over time, can facilitate collaboration for knowledge sharing. An understanding of how users interact in mediated encounters contributes to our knowledge of how focused interactions may enable collaboration in ES. Drawing on the findings, the research outlines some implications for the practice of collaborative ES in contemporary organisations.
This paper examines the complex relationship between the embeddedness of multinational enterprises (MNEs) in host-country political networks and their long-run competitive positions in host emerging markets. We report the findings of a longitudinal study of the Chinese automobile sector from the early 1980s to the mid 2000s. Using data from 142 interviews over 11 years, and a wide range of secondary sources, we explore the process through which the value of political embeddedness changed over time in the face of profound and rapid changes in host-country business environments. On the basis of this longitudinal study, the paper unravels the underlying mechanisms that lead to the declining, and even negative, value of deep political embeddedness by MNEs in a politically stable emerging economy.
Institutional change has been characterized as the outcome of a dialectical process whereby different constituent communities within an organizational field promote competing institutional logics (Seo and Creed 2002). However, the dynamics of this dialectical process are poorly understood. In this paper, we examine this dialectical process by drawing upon a longitudinal study of a policy intervention in the UK aimed at promoting a logic of knowledge production in genetics science (termed here 'Mode 2'; cf. Nowotny et al. 2001) that was co-present, and competing, with the dominant logic surrounding the production of academic science ('Mode 1'). We highlight the tensions and interplays that occurred between these competing institutional logics by examining the rhetoric propounded, the actions taken, and their effects among constituent communities of policy makers and scientists. Our findings demonstrate, first, that tensions can exist within as well as across constituent communities within the organizational field; and, second, how mobilizing a new institutional logic related to knowledge production may produce its own contradictions that can, paradoxically, lead to the simultaneous resurrection (and reinforcement) of the old logic. We discuss the implications for managing projects where these different logics are co-mingled.
This paper introduces five new univariate exponentially weighted methods for forecasting intraday time series that contain both intraweek and intraday seasonal cycles. Applications of relevance include forecasting volumes of call centre arrivals, transportation, e-mail traffic and electricity loads. The first method that we develop extends an exponential smoothing formulation that has been used for daily sales data, and which involves smoothing the total weekly volume and its split across the periods of the week. Two new methods are proposed that use discount weighted regression (DWR). The first uses DWR to estimate the time-varying parameters of a model with trigonometric terms. The second introduces DWR splines. We also consider a time-varying spline that uses exponential smoothing. The final new method presented here involves the use of singular value decomposition followed by exponential smoothing. Empirical results are provided using a series of intraday call centre arrivals.
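The first method's core idea, smoothing the total weekly volume and its split across the periods of the week, can be sketched as follows; the smoothing parameters, data shape, and renormalisation step are illustrative assumptions, not the paper's estimated formulation:

```python
def smooth_total_and_split(weeks, alpha=0.2, beta=0.1):
    """Exponentially smooth (a) the total weekly volume and (b) each
    intraweek period's share of that total; forecast next week as the
    smoothed total times the smoothed (renormalised) shares.
    A simplified sketch of the 'total and split' idea only."""
    total = float(sum(weeks[0]))
    split = [v / total for v in weeks[0]]  # initial within-week proportions
    for week in weeks[1:]:
        week_total = float(sum(week))
        total = alpha * week_total + (1 - alpha) * total
        split = [beta * (v / week_total) + (1 - beta) * s
                 for v, s in zip(week, split)]
    norm = sum(split)
    return [total * s / norm for s in split]

# Perfectly repeating data should be forecast exactly.
history = [[10.0, 20.0, 30.0]] * 5  # 5 'weeks' of 3 'periods' each
forecast = smooth_total_and_split(history)
```

Separating level from profile in this way lets the weekly volume and the intraweek shape adapt at different speeds, which is the appeal of the approach for call centre arrival data.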
Alysha De Livera’s discussion focuses on the transformation of the data and the inclusion of ARMA terms for the residuals of the exponential smoothing models. Drawing on the formulation for the HWT model, I implemented all exponential smoothing models with an AR(1) term included for the residual. In my paper, I applied a logarithmic transformation to the NHS Direct data prior to fitting all methods. The BATS model, presented in Alysha De Livera’s discussion, is a generalisation of the implementation of the HWT model in my paper, with a Box-Cox transformation and ARMA terms of different lags for the residual.
Online short-term load forecasting is needed for the real-time scheduling of electricity generation. Univariate methods have been developed that model the intraweek and intraday seasonal cycles in intraday load data. Three such methods, shown to be competitive in recent empirical studies, are double seasonal ARMA, an adaptation of Holt-Winters exponential smoothing for double seasonality, and another, recently proposed, exponential smoothing method. In multiple years of load data, in addition to intraday and intraweek cycles, an intrayear seasonal cycle is also apparent. We extend the three double seasonal methods in order to accommodate the intrayear seasonal cycle. Using six years of British and French data, we show that for prediction up to a day-ahead the triple seasonal methods outperform the double seasonal methods, and also a univariate neural network approach. Further improvement in accuracy is produced by using a combination of the forecasts from two of the triple seasonal methods.
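The final combination step, averaging the forecasts from two of the triple seasonal methods, is straightforward to sketch; the load values and the equal 50/50 weight below are invented for illustration:

```python
def combine_forecasts(f1, f2, weight=0.5):
    """Weighted average of two forecast series; weight=0.5 gives the simple
    equal-weight combination (an illustrative default, not the paper's
    estimated weight)."""
    return [weight * a + (1 - weight) * b for a, b in zip(f1, f2)]

# Hypothetical half-hourly load forecasts (MW) from two methods:
sarma_forecast = [310.0, 305.0, 298.0]
hwt_forecast   = [300.0, 309.0, 302.0]
combined = combine_forecasts(sarma_forecast, hwt_forecast)
# combined → [305.0, 307.0, 300.0]
```

Averaging tends to cancel the idiosyncratic errors of the individual methods, which is why simple combinations often beat each component forecast.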
This report summarises the output of a six-month, exploratory project that brought together an interdisciplinary network of leading academics and EEF member companies to address the question: ‘What are the barriers to the adoption of high performance work practices in manufacturing firms and how can they be overcome?’
The coalition government has shown the potential of political brokering. At an organisational level, write Jonathan Trevor and Philip Stiles, this is a role HR should take on board if it wants to create the innovative cultures needed for long-term success
A new entrant in the nascent online peer lending space, Lending Club must decide whether or not to register with the SEC. Lending Club provided a platform through which individual borrowers could receive loans funded by individuals who chose to invest in them. The management team wanted to grow the business and also hoped to establish a secondary market to give lender members liquidity. The SEC had raised questions about whether or not the promissory notes issued to lender members were in fact securities, but there were legal arguments on both sides. While the legal situation was unclear, Lending Club considered the benefits of applying to the SEC, but had to decide whether it would be worth the significant investment of time and money, both up front and going forward.
Students must determine whether or not Visa, which had its IPO one month prior, is a good investment. The case provides an overview of multi-sided platform businesses and the payments industry in general. Visa's business model and economics are reviewed.
Organizations struggle to balance simultaneous imperatives to exploit and explore, yet theorists differ as to whether exploitation undermines or enhances exploration. The debate reflects a gap: the missing theoretical mechanism by which organizations break free of old routines and discover new ones. We propose that the missing link is perturbation: novel stimuli that disrupt the execution of specialized routines. Perturbation creates opportunities for organizations to invoke exploratory, general-purpose problem-solving routines. In mature organizations, exogenous perturbations become increasingly scarce to the point that exploration is stifled and inertia sets in. We theorize that mature organizations can sustain exploration by deliberately inducing perturbations in their own processes. Our theory yields testable hypotheses about the relationships between exploitation, perturbation, and exploration. We provide illustrations from The Toyota Motor Company to show how deliberate perturbation enables efficient exploration in the midst of intense exploitation.
This paper examines the extent of international headquarters relocations worldwide. About 6 percent of all multinationals relocated their headquarters to another country in the 1997-2007 period. The paper presents empirical evidence on the role of tax in these relocation decisions. It considers a sample of 140 multinationals that relocated their headquarters over the past decade and compares them to a control group of 1,943 multinationals that have not done so. It is found that the additional tax due in the home country upon repatriation of foreign profits has a positive effect on the probability of relocation. The empirical results suggest that an increase in the repatriation tax by 10 percentage points would raise the share of relocating multinationals by 2.2 percentage points, equivalent to an increase in the number of relocations by more than one third. Furthermore, the introduction of controlled foreign corporation legislation also has a positive effect on the number of relocations.
This paper analyzes enhanced cooperation agreements in corporate taxation in a three country tax competition model where countries differ in size. We characterize equilibrium tax rates and the optimal tax responses due to the formation of an enhanced cooperation agreement. Conditions for strategic complementarity or strategic substitutability of tax rates are crucial for the welfare effects of enhanced cooperation. Simulations show that enhanced cooperation is unlikely to be feasible for small countries. When enhanced cooperation is feasible, it may hamper global harmonization. Only when countries are of similar size is global harmonization a feasible outcome.
In recent years the subject of decision making on large transport infrastructure projects and related institutional issues have received much attention in the academic and professional literature, partly triggered by the book Megaprojects and Risk (Flyvbjerg et al., 2003). This book shows that for large infrastructure projects cost overruns and demand shortfalls are very common, and that institutional factors play an important role in this being the case. Recent academic contributions include special issues in Environment and Planning B (2007) and Transportation Planning and Technology (2007) and the book 'Decision-making on mega projects. Cost-benefit analysis, planning and innovation' (Priemus et al., 2008).
The philosopher of art Roger Scruton has claimed that photographic images are not representations, on the basis of the role of causal rather than intentional processes in arriving at the content of a photographic image (Scruton, 1981). His claim was controversial at the time, and still is, but had the merit of being a springboard for asking important questions about what kinds of representation result from the technologies used in depicting and visualising. In the context of computational picturing of different kinds, in imaging and other forms of visualisation, the question arises again, but this time in an even more interesting form, since these techniques are often hybrids of different principles and techniques. A digital image results from a complex interrelationship of physical, mathematical and technological principles, embedded within human and social situations. This paper consists of three sections, each presenting a view of the question whether digital imaging and digital visual artefacts generally are representations, from a different perspective. These perspectives are not representative, but aim only to accomplish what Scruton’s paper did succeed in accomplishing, that is, being a provocation and a springboard for a broader discussion.
Futures practices have always sought to bridge longer-term contextual uncertainty and today's actions. Despite the emergence of diverse foresight lineages, methods and tools, the differences between 'better proactive foresight' and 'better reactive preparedness' remain unclear. This paper focuses on the 2007-2010 financial crisis in order to clarify misconceptions and confusions concerning 'scenario planning'. We assess why the crisis is not unique and propose how scenarios might be helpful in overcoming the difficulties of learning from crisis. We focus on how scenarios were used in the run-up to this crisis to clarify the nature, role and effectiveness of scenario work. We highlight implications for scholarship and practice, including: overcoming simplistic distinctions of scenarios as products or processes, and as outputs or inputs. We assess the power of scenarios as frames and their role in re-framing strategic conversation, and contrast the misapplication of probability in systemic risk analysis with the co-production of plausibility between builders and users of scenarios. Finally, we explore why the promises of deploying scenarios to address normal accidents and systemic risks are not yet
Recently I presented, with others, a general statement of the sequence of social and intellectual processes which characterize the emergence, growth and final decline of specific areas of scientific endeavour. A central concern of my own research has been to examine the extent to which scientific activity in one particular area, that associated with research into the pulsar phenomenon, corresponds to the sequence of processes described in the theoretical statement. An obvious preliminary objective of my research was to write an outline history of the intellectual development of the pulsar area. Although many of the methodological problems relating to the investigation of the social development of scientific specialties have already been examined, it is less widely realized that methodological problems of equal difficulty occur in the analysis of the intellectual development of specialties, even though much of the basic data can be obtained without intervention in the on-going social process. In this paper I report my attempt to describe the intellectual history of pulsar astronomy. When I began this part of my study I was unaware of any special methodological problems. As I progressed, I not only became acutely aware that there were such problems, but I also realized they would prevent me from carrying out my original intention of providing a straightforward chronological history of this particular intellectual
It is a routine teaching day. The advanced-level course in science and technology studies (S&TS) is holding its fourth weekly class of the semester. The students dutifully indulge the professor in his incantation of one of the iconic case studies of the field: Langdon Winner’s well-known analysis of Moses’ bridges. Winner claims that the bridges built by Moses on Long Island are an example of a technology which has political qualities: by this he means that the bridges were designed (consciously or unconsciously) to have a particular social effect.
Although the notion of netnography as a set of tools for exploring consumer behaviour online is not new, the potential of netnographic methods in market research and analysis is still largely undeveloped. In this article, we explore the ways in which netnographic techniques can be used in particular to understand the characteristics and effectiveness of electronic word-of-mouth (eWOM), an increasingly significant influence on the consumer’s decision-making process. We provide an assessment of the main strengths, weaknesses and ethical concerns associated with the use of netnographic techniques. Unlike previous online ethnographic studies, which tend to employ broader socio-cultural observations, we analyse consumers’ information gathering and purchasing activities on a discussion forum. We relate our findings to a model which sets out three components of communications effectiveness: modes of persuasion that are based upon authority, emotion or logic. We conclude by reviewing the implications of netnography for both academic research and marketing.
This paper considers the impact of companies' external communications about strategy on their reputations. Such strategy communications can take many forms, including material in companies' own annual reports and the media, and the impact may be felt in various ways, including consumer loyalty and managerial rankings such as the Fortune Most Admired list. In this paper, however, the empirical focus is on the growing phenomenon of companies' own forward-looking communications about their strategies, and reputational impact is considered in terms of consequences in the financial markets. Specifically we address the spread of such strategy communications, abnormal stock returns due to these communications, and the nature of these communications in terms of their language, themes and strategies involved.
This paper traces the evolution of strategic planning practice through a content analysis of advertisements for strategic planning jobs in the New York Times from 1960 to 2003. We address two issues. First, responding to prominent criticisms of analytical approaches to strategic planning, we consider the role of analysis in strategic planning jobs over time. Second, following the association of the MBA degree with these analytical approaches, we examine the relationship of MBA qualifications to analysis in strategic planning jobs. We find no significant shift from analysis in advertised strategic planning jobs in recent years. The MBA appears strongly associated with analytical strategic planning, but not solely. We conclude that analysis has a robust role in strategic planning and discuss implications for strategy practice, research, and education.
In this paper we use discourse analysis to investigate the process of institutionalization and change of strategic planning practices over four decades. We propose a model that connects the production and consumption of text to the process of institutionalization and its subsequent expression in new practices. To do this, we use as empirical material job advertisements for strategic planners and their content. To map and analyze the semantic network of concepts in strategic planner adverts, we used ReseauLu (RL) (Cambrosio et al., 2004; Mogoutov, 2008), a network analysis software package designed specifically for the treatment and mapping of complex, heterogeneous relational data so that they can be visually inspected and interpreted. Given that discourse is an important element in the process of social change (Fairclough, 2005), we posit that the content of job advertisements participates in the process of institutionalization of strategic planning as an occupation. We find that vocabulary forms a discourse that participates in the institutionalization of strategic planning as an occupation and as a practice. “Long”, “range” and “planning” tend to disappear in favor of “strategy” between the 1960s and the 1990s. Only “planning” survives the test of time, as it is combined with other terms such as “strategic” and has acquired a generic sense that is less connected to a specific period in time. We conclude that the institution of strategy changed over the last forty years from a primarily corporate orientation, concerned with extending budget disciplines over the long term, to a more strategic one concerned with reactivity, integration of business-level initiatives and competition. However, the activity of strategizing remains centered on planning, an activity requiring analytical skills, synthesis and creativity.
So far, SAP scholars have explored the question of what strategists “do”, providing at least three different types of answers to this question: strategists talk, they write texts and they use various kinds of tools (models, methodologies, etc.). The pervasiveness of ethnographic research in the area has led to a predominance of studies dealing with strategic talk and agency. There is still relatively little research on the production and use of texts and tools; we believe that there is a need to bring to the fore the textual, intertextual and material aspects of strategy as practice. In addition, research on the connections between talk, texts and tools would be very welcome: how do managers talk about texts and tools? How are texts and material objects mobilized during the flow of conversations? How do strategists “textualize” their conversations and their use of tools? How are the choice and usage of tools related to other aspects of the practice of strategy? And finally, what are the best methodologies for answering these questions?
The association between quality assessment and status attainment is fundamental in the sociology of markets. However, past literature has failed to explain status dynamics, such as mobility in the status order, where highly uncertain goods make quality hard to measure. This paper contributes to the understanding of this relationship by identifying various social mechanisms behind quality evaluation processes. It also explores why certain products are considered to be more valuable than others. The results demonstrate that only through exchange relations and active participation in the marketplace (what I call status actions) can actors affect the social definition of what are perceived as high-quality products. The results also provide evidence of two major market outcomes: conservatism and centralization. The findings improve the understanding of the evolution and dynamics of status in uncertain markets from both the micro- and macro-sociology of markets. These processes are illustrated in a study of the art market in Israel.
Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.
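The accuracy–diversity blend described in this abstract can be sketched in code. The following is a minimal illustrative hybrid, not the paper's actual algorithm: the matrix `R`, the function `hybrid_recommend` and the blending parameter `lam` are all assumptions for exposition. An accuracy-focused score (items favoured by users similar to the target user) is combined with a diversity-promoting score (the same signal discounted by item popularity, which boosts niche items), with `lam` tuning the trade-off.

```python
import numpy as np

def hybrid_recommend(R, user, lam=0.5, top_n=3):
    """Recommend unseen items for `user` from a binary user-item matrix R.

    lam = 0.0 -> purely accuracy-focused (popular, similarity-driven items);
    lam = 1.0 -> purely diversity-focused (popularity-discounted items).
    Illustrative sketch only; real hybrids (e.g. diffusion-based methods)
    are more sophisticated.
    """
    k_items = R.sum(axis=0)            # item popularity (degree)
    sim = R @ R[user]                  # similarity of each user to `user`
    acc = (R.T @ sim).astype(float)    # accuracy signal: items liked by similar users
    div = acc / np.maximum(k_items, 1) # diversity signal: discount popular items

    def norm(x):                       # scale each signal to [0, 1] before blending
        m = x.max()
        return x / m if m > 0 else x

    score = (1 - lam) * norm(acc) + lam * norm(div)
    score[R[user] == 1] = -np.inf      # never re-recommend items already seen
    return np.argsort(-score, kind="stable")[:top_n]
```

Normalizing both signals before blending keeps `lam` interpretable as a pure mixing weight; without it, the raw accuracy scores (which grow with popularity) would dominate the blend.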
We draw on an in-depth longitudinal analysis of conflict over harvesting practices and decision authority in the British Columbia coastal forest industry to understand the role of institutional work in the transformation of organizational fields. We examine the work of actors to create, maintain, and disrupt the practices that are considered legitimate within a field (practice work) and the boundaries between sets of individuals and groups (boundary work), and the interplay of these two forms of institutional work in effecting change. We find that actors' boundary work and practice work operate in recursive configurations that underpin cycles of institutional innovation, conflict, stability, and restabilization. We also find that transitions between these cycles are triggered by combinations of three conditions: (1) the state of the boundaries, (2) the state of practices, and (3) the existence of actors with the capacity to undertake the boundary and practice work of a different institutional process. These findings contribute to untangling the paradox of embedded agency—how those subject to the institutions in a field can effect changes in them. We also contribute to an understanding of the processes and mechanisms that drive changes in the institutional lifecycle.
Over the past decade, researchers have become increasingly interested in the theoretical and practical issue of governance as it relates to information and communication technologies. However, while the field has grown with the proliferation and use of such technologies, its scope and focus are far from clear: what counts as governance in settings in which people increasingly interact through networked digital media? How can we think about interaction, coordination and control in these environments? What is the role of technologies in creating and maintaining regimes of governance? And what methodologies and methods are appropriate for understanding them? This paper draws on an interdisciplinary workshop held at Oxford University to take a closer look at some of these issues. It suggests that a key to understanding the heterogeneity of workshop contributions is to attend to the performativity of governance and governance research, the analytic status of ‘technology’ and the conceptual and methodological devices we use to research them.