Some impressions of the inaugural QUEX Symposium

QUEX is the University of Queensland and University of Exeter Institute for Global Sustainability and Wellbeing. It is a partnership between the two universities with three themes – Healthy Ageing, Environmental Sustainability, and Physical Activity and Nutrition – that has joint PhD studentships as a central element. Each year ten studentships are available and these will lead to jointly awarded PhDs from the two universities; other funding streams are available to research, teaching, and professional services staff in the two universities to develop and support joint projects and visits.

The first cohort of students started in January 2018 and the first symposium was held last week in Exeter. I attended in my role as Exeter theme lead for the Healthy Ageing theme.

At the symposium I was struck by three things I thought were great about QUEX:

First, the students: they are all very impressive. QUEX students spend two years at one of the universities and one year at the other, usually sandwiched in the middle. The students are getting a very international experience and are themselves from all over the world: we have (from memory) three people from Britain, one from Australia, one from Fiji, two from Portugal, one from the US, one from New Zealand, and one from Japan. There were over 700 applications for the studentships so everyone who is in this cohort has had to really stand out and I thought they did: each of them is a great ambassador for QUEX and I was impressed by how good they are.

Second, the opportunities created. One motivation for the formation of QUEX was the observation that papers on which researchers from both Exeter and Queensland were authors were cited more than papers in which only one university was involved. I would see this as coming, in part, from the fact that dual-institution papers can tap into more than one national citation network (an example of the strength of weak ties, in social network terms). The advantage to students and to research teams is the same: they are able to tap into networks of scientists and knowledge that are likely relatively unconnected, and so they will automatically extend the reach and influence of their work simply by virtue of being part of QUEX.
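The weak-ties point can be made concrete with a toy sketch. The code below is purely illustrative (the node names and cluster sizes are invented, not data about either university): two small citation clusters are unreachable from each other until a single dual-institution "bridge" paper connects them, at which point the reach of every node on each side doubles.

```python
from collections import deque

def reachable(adj, start):
    """Breadth-first search: return the set of nodes reachable from `start`."""
    seen = {start}
    q = deque([start])
    while q:
        node = q.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                q.append(nbr)
    return seen

def build(edges):
    """Build an undirected adjacency map from a list of edges."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

# Two hypothetical citation clusters, one per national network.
edges_uk = [("ex1", "ex2"), ("ex2", "ex3"), ("ex1", "ex3")]
edges_au = [("uq1", "uq2"), ("uq2", "uq3"), ("uq1", "uq3")]

print(len(reachable(build(edges_uk + edges_au), "ex1")))  # 3: only the UK cluster

# A single dual-institution paper acts as a weak tie bridging the clusters.
bridge = [("ex1", "uq1")]
print(len(reachable(build(edges_uk + edges_au + bridge), "ex1")))  # 6: both clusters
```

The design point is just Granovetter's: the bridge edge is "weak" (one paper) but it is the only route between the clusters, so it disproportionately increases reach.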

Third, this kind of international PhD arrangement seems like one version of the future of postgraduate study. I’m going to avoid using the phrase “in our increasingly globalized world” but it is the case that people who can work comfortably and well in different settings have more avenues open to them and, as researchers, more opportunities to join excellent research groups and to address problems that are global in nature: problems like those associated with the QUEX research themes. Going beyond shared supervision or short-term visits, arrangements like QUEX permit people to gain extended experience in more than one research (and cultural) setting. The presence of, and talks given by, the VCs of both universities at this symposium indicates that university leaders also see the importance of such undertakings.

I was also struck by three challenges that programmes like QUEX face:

The first is around communication. With students and supervisors spread across multiple campuses in universities thousands of miles apart, excellent internal and external communication are essential to creating and maintaining an esprit de corps as well as simply to ensuring the smooth running of the programme and ensuring everyone is up to date with progress and opportunities. Some dedicated comms resource is important to making this happen.

The second challenge is around sustaining things: this applies to QUEX itself, which is currently dependent on funding from both universities, and also to the careers of the researchers involved. One difficulty is that a lot of funding can’t be used to pay for people from other countries; for example, most research council (government) funding in the UK can’t be used to pay for the time of researchers based in Australia. Doing this kind of cross-national research runs the risk of being a bit like doing interdisciplinary research: everybody talks it up and it’s clearly worthwhile but funders and journals and assessment panels are all set up around single disciplines and you risk falling between stools. Part of the ongoing behind-the-scenes work in QUEX is going to involve identifying fellowships and funding schemes that support international collaboration.

Finally, there is a challenge around environmental sustainability. Flying researchers and other university staff around the world is important and exciting but, as far as I know, flying is one of the worst things you can do for the environment. I think we all have to work out how we can do this kind of work without screwing the planet up. In a programme that has environmental sustainability as one of its themes it would be troubling if we did not address this.

As a programme, QUEX is going to go from strength to strength. At the time of writing, the second cohort of studentships is being advertised.

Understanding the rollercoaster of contextual influences on the spread of practices


Within our Implementation Science Group at PenCLAHRC, we’ve been exploring how evidence-based practices implemented in one hospital can be spread to other departments or hospitals in the South West of England. In theory, this sounds simple. Why wouldn’t other places make improvements to care found to work well elsewhere? In practice, as is often the case, the experience was not so straightforward. So, we sought to study in real time how context can critically influence the spread of practices – helping in some places and hindering in others.

Working with the South West Academic Health Science Network (SW AHSN), we studied, using qualitative methods, two collaborative projects seeking to spread improvements to healthcare in acute settings: the emergency treatment of acute ischaemic stroke (also involving the South West Cardiovascular Strategic Clinical Network) and the implementation of Patient-Initiated Clinics. We wanted to understand how the differences in context within six hospitals (for acute ischaemic stroke) and three departments in one hospital (for Patient-Initiated Clinics) influenced progress in the spread of these particular practices in healthcare.

To help us explore the influence of context we were informed by one of the many frameworks available in the field of implementation science: the Consolidated Framework for Implementation Research (CFIR; Damschroder et al 2009). This offered us a taxonomy to develop insights into the factors that influenced, positively or negatively, the spread of these evidence-based practices. Three of its domains were particularly useful for considering contextual factors: the outer setting, the inner setting, and the characteristics of individuals.

For the spread of stroke treatment improvements and the Patient-Initiated Clinics intervention, our analysis highlighted the following important contextual influences.

At the macro-level of context (e.g. focus on patients, national and regional influences) we found these influences playing a role:

  • Whether there was competitive pressure to improve as other organisations/teams had already done so.
  • Whether patients’ needs were known, and prioritised, and if implementing the new practices was viewed as beneficially meeting their needs.
  • An absence of national and regional incentives and policies to drive/support uptake.

We found the most influential factors occurred at the meso-level context (e.g. organisation, department and team) and included:

  • Whether the change to practice was viewed as a priority or strongly needed by each organisation/department.
  • How ready for change each organisation or department was, particularly whether there were skilled and engaged leaders, a stable team, and the necessary resources (i.e. money, space, and time).
  • Whether each setting could attract, involve, and engage key individuals and locate a champion to act as a catalyst/driver for spreading the practice locally.
  • An ability to be flexible (i.e. about timescales and approach) and persist in response to barriers encountered within each organisation/department.

And we found these influences at the micro-level of context (e.g. key individuals within a setting):

  • What key individuals know of and believe about an improvement and intervention.
  • The perceived value key individuals placed on making a change to current practice.

From our work, we generated lessons in the form of questions. These may help to identify contextual factors that could influence future efforts to spread evidence-based practices. We note a need to consider how differing contextual factors and levels interact with each other.

We also identified other factors that influenced uptake: how key people viewed the strength of the evidence underpinning the change, and the benefit of external support from university researchers and, for the stroke project, a quality improvement manager. We identified some challenges too, for example how to ensure the sustainability of improvements and interventions over time so they become routine healthcare practice.

One of our main lessons was for those involved in spreading these practices to be prepared for hard work and to expect the unexpected. So, it’s a bit like being on a rollercoaster! This makes sense when we consider that each healthcare setting will have its own demands and complexities that affect the uptake of improvements and interventions. We will be working with the SW AHSN on how we can build on these lessons to influence future efforts to spread practice. To read more about our study, see our project page.


Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science. 2009;4:50. doi:10.1186/1748-5908-4-50

Pfadenhauer L, Rohwer A, Burns J, Booth A, Lysdahl KB, Hofmann B, Gerhardus A, Mozygemba K, Tummers M, Wahlster P, Rehfuess E. Guidance for the Assessment of Context and Implementation in Health Technology Assessments (HTA) and Systematic Reviews of Complex Interventions: The Context and Implementation of Complex Interventions (CICI) Framework [Online]. 2016.

The Health Foundation. Perspectives on context: a selection of essays considering the role of context in successful quality improvement. London: The Health Foundation. 2014.

Strong the Force is……

A premiere for Star Wars last week and my premiere blog post this week; I’m not sure which is more exciting…

This post follows on from the theme of our last group blog, which reflected on a discussion during our reading group with the University of Western Ontario in Canada. This focused on the current status of frameworks, theories and models in implementation science.

I’m relatively new to the field of implementation science: my background is in systems-science research. However, I’ve recently had experience in using the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework to inform the evaluation of a capacity-building programme. The programme relied heavily upon the facilitation of academic mentors to educate and support healthcare analysts to develop modelling skills over a 12-month period. The framework proved useful in explaining the data I’d captured but my nagging thought was: if I’d used the framework prospectively, would this have further enabled the healthcare projects, and could the healthcare analysts use the framework themselves to support their proposals for change?

A colleague flagged up the QUERI Implementation Network, which recently hosted a one-hour online seminar on the framework. This was delivered very well by Jeffrey Smith and I really valued feeling part of a conversation within a community much larger than my normal habitat.

He reflected on the differences between the original PARIHS framework (Kitson et al 2008) and the ‘integrated’ PARIHS (Harvey and Kitson 2016). The differences relate predominantly to the construct of ‘facilitation’, which is now considered the most influential construct for successful implementation. This is reflected in their equation:

Successful Implementation (SI) = Fac^n (Innovation + Recipients + Context)

Jeffrey touched on the prospective use of i-PARIHS and acknowledged there was limited evidence of its use in this way.

During my induction period into the literature and field of implementation science it strikes me that the force to create more frameworks is strong, whereas the testing and use of those already in existence, for different purposes and in different contexts, seems less attractive or perhaps achievable.

This seems a familiar academic situation, and one I recognise from previous work with another slippery multi-dimensional concept! So my question is: why as researchers do we not seek to test conceptual frameworks through a more applied form of research? The methodology to develop a new tool or measure requires research to demonstrate its sensitivity and validity for an intended context. Can implementation frameworks be viewed in this light, to become validated tools which can support frontline healthcare staff to design and implement change? Should we be aiming to demonstrate retrospective and prospective validity and sensitivity of only a few frameworks to enable their applied use in the field by non-implementation scientists?

Listening to the seminar, it seems that considerable efforts have been made to develop i-PARIHS as a toolkit to enable users to understand the facilitation process and role in a practical and pragmatic sense (Harvey and Kitson 2016). The supplementary material to the Harvey and Kitson 2016 paper also illustrates the difference between the concept of facilitation as a role and as a process at the macro, meso, and micro levels of the system. This approach, which takes the academic insights and creates a practical toolkit, seems admirable in this vast field of implementation science literature and evidence. I’m unsure at this stage how implementation science informs the transfer of its own body of knowledge. Do co-design or user-centred design principles inform the development of implementation tools and frameworks, and are these intended to be used by non-academics? If prospective use of these frameworks is one of the objectives then end-user engagement and evaluation of carefully selected aims would seem essential to demonstrate the usability and validity of a framework. This is easy to write, but in reality perhaps much harder to achieve and I do not underestimate the challenge.

In summary, I have two nagging questions:

  1. Do or should implementation science frameworks practically help those at the frontline implement change more successfully?
  2. Should research efforts be focused towards evaluating implementation science frameworks to support the development of implementation tools for the non-academic audience?


Kitson AL, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implementation Science. 2008;3(1):1.

Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implementation Science. 2016;11(1):33.

Is this thing on? Cultivating K-star communication

To mark the two-year anniversary of the last post (but who’s counting?), we are resurrecting this LKD blog, much to the delight of our one and only commenter (Hi Mark!).

Okay, enough self-deprecation. I am genuinely excited about the new content that will be coming to this blog in the near future, courtesy of PenCLAHRC’s very own Implementation Science Posse™.*

Our group has certainly grown in the last couple of years, bringing in a diversity of experiences, expertise, interests, and perspectives about all things K-star and implementation science. La posse de mosaïque,** if you will.

One platform to facilitate exchange of these diverse perspectives that we have been trialling is a K-star reading group – a club of sorts. It’s just like fight club.

Here are the rules:

The first rule of ~~Fight~~ Reading Club is: ~~You do not talk about Fight Club~~ talk about it with anyone who will listen.

The second rule of ~~Fight~~ Reading Club is: ~~You do not talk about Fight Club~~ try to convince people to listen.

Third rule of ~~Fight~~ Reading Club: ~~Someone yells stop, goes limp, taps out, the fight is over~~ sorry, but you continue until the allotted time runs out. Having a coffee before the meeting can help avoid this.

Fourth rule: ~~Only two guys to a fight~~ The more the merrier.

Our first reading group meetings earlier in the year spurred interesting conversations within our team. As someone who has had to get re-immersed in the literature after some time away, I have found that creating the time to read, critically reflect, and debate recent developments in the field with colleagues has been very helpful. I find these discussions important for situating my own perspectives and where I stand in relation to current discourses.

In the spirit of further broadening these conversations and fostering the relationships we have with researchers at the University of Western Ontario in Canada, we held a joint reading group with Dr. Anita Kothari, Dr. Shannon Sibbald and their students.

Commemorating our UK-Canada ties with maple leaf shaped burgers.

The discussion proved to be an interesting reflection on the recently redeveloped i-PARIHS framework. Overall, the discussion raised a number of questions and issues, not just about this iteration of PARIHS, but also about the overall state of play of frameworks, theories, and models in IS. With regard to i-PARIHS, there were questions as to whether any substantial contributions were made. Some expected more in terms of providing guidance for operationalization, although it was also pointed out that this was perhaps the theoretical piece of the work, with the operationalization paper to come in the future. Still, others questioned the increased focus on facilitation. An issue raised was that although facilitation is undoubtedly an important part of implementation and its emphasis in i-PARIHS was an overall positive thing, was the discussion about facilitation left too thin to have a meaningful impact? It was pointed out that much discussion about facilitation reflected concepts in the knowledge brokering literature that were perhaps referred to in this paper with slightly different terms. What was being missed by not referring to and acknowledging the longer history of these discourses in that body of work? There were also a number of questions raised as to other characteristics of facilitators that were not mentioned in the paper. For instance, the facilitators’ position and power to create change, as well as their relationship with the organization (e.g., internal or external), play an important role in implementation; different characteristics are needed depending on the scale of change that needs to happen. In all, our reading group on the i-PARIHS highlighted some initial points of deliberation and reflection, in line with the authors’ request for feedback from the IS community.

An interesting question that was asked – and remains unanswered – relates to why certain implementation science frameworks seem to be more popular in one country than another. For instance, our Canadian colleagues pointed out that the PARIHS framework is frequently used and mentioned in many Canadian studies, which is certainly not the case in England. One possible explanation offered is the relative accessibility of PARIHS for those who may be new or not well versed in the field. In comparing PARIHS with another implementation science framework like CFIR, it does not take long to realize that both the language used and the simplicity of the constructs make PARIHS much more accessible. Another possibility is that one of the critiques of PARIHS (that it has been applied in diverse ways) is also a strength: its flexibility allowed people to use it in a way that was in line with their own understanding of the framework, whatever that may be.

Nevertheless, the question remained as to what gives rise to the ubiquity of one theory/model/framework over another in different settings. It was great to be able to have these international(!) conversations with our colleagues. I see these shared reading group discussions as opportunities to be cognizant of the trends and developments in the field in these different countries. How do we share and learn from each other? How do we support each other’s work as we advocate for continued investment in implementation science research in both countries? Alas, these are the big questions our humble reading group will take baby steps to address.

*No one gave me permission to call us this. Fun fact: We very nearly called our team “Knowledge for Change!” (KFC), but decided against it due to the whole WWF vs. WWF precedent.

**This is probably grammatically incorrect. I’ve mostly forgotten my Canadian French classes.

Implementation barriers in dementia care: the Machine Trick

My colleague Jo Thompson Coon and I were invited to attend the Alzheimer’s Society Annual Research Conference last week and give a workshop on implementation in dementia research – a topic in which we’ve a particular interest and on which the Society is funding us to do some work.


We ran the session based around a shortened version of Howie Becker’s “Machine Trick”; the trick involves coming to understand a social problem better by imagining that you have to design a machine that would produce the situation you have observed: in this case, the failures of knowledge mobilisation around dementia research and the practice of dementia care. (A future post will go into more detail on Becker’s trick and how it can be used in implementation workshops.)

The workshop was attended by just over 40 people, a mix of researchers and research network volunteers – that is, the lay people who review and advise on Alzheimer’s Society research projects. After a brief introduction we challenged those in the room to split into small groups and identify the components that they thought our machine should have. Each of the six groups then fed back the three most important bits they had thought of, and a few people shouted out other things they thought important at the end.

The components the workshop participants identified are listed below. Those that came up more than once are marked “(x2)”.

• Use of different languages by different parts of the machine x2
• Lack of understanding of barriers at outset of (research) project x2
• No patient and public involvement x2
• Lack of willingness to change or accept innovation
• Research not grounded in or exposed to reality
• Lack of leadership
• Poor quality research
• Lack of polish in presentation of findings to wider audience
• Reactive workplace with no time to plan – both for researchers and practitioners
• Poor communication and/or excessive communication between parties
• No use of experience or prior learning from previous work – for both researchers and practitioners
• Lack of appreciation of time it takes to evaluate something
• Lack of trust between parties
• Kudos and benefits only accrue to one side (typically researchers)
• Priorities geared towards immediate clinical care – lack of time and resources to think about research
• Funding geared towards finding out new stuff rather than implementing or disseminating – and no time in grants to think about implementation and dissemination
• Stifling of innovation and creativity – no time or attention available for new things
• Lack of understanding of politics – both with a small ‘p’ and a big ‘P’

These components are pretty clear and need little interpretation: the problems inherent in them, if we wanted to fix or correct the machine, are readily apparent. They also cover a lot of ground and capture some of the complications and complexities inherent in implementing healthcare research – and I use that term purposefully, since most if not all of them could be applied to many care situations, not just to dementia. In a longer workshop we might have gone on to explore how the challenges represented in the machine could be overcome or negotiated; as it was I think the format was useful in bringing researchers and non-researchers together to think about, discuss, and identify the challenges of implementation.

There are no heroes in knowledge mobilisation

As a child I never really had a hero. I’m reminded I should have had one, or at least that other people did, when I have to fill in “secret questions” for password recovery: alongside “mother’s maiden name” and “place of birth” you sometimes see “childhood hero”. I never had one.

Now I’m interested in implementation science and K* and I wonder: should I have a hero now? Or more broadly: who are the heroes of knowledge mobilisation?

So many heroes, so little time

There are lots of heroes in the world: footballers, actors, musicians. Some heroes are super: Superman, Silk Spectre, Jean Grey. And some heroes are real, and professional: in public health, Edward Jenner is known for pioneering the development of the smallpox vaccine, John Snow is famed for “removing the pump handle” as part of his investigations into the causes of cholera, and Louis Pasteur recognised for his work on vaccination and pasteurization.

Except… that if you read Bruno Latour’s book The Pasteurization of France (originally published as Les Microbes: guerre et paix) you find presented a different perspective on Pasteur’s work and legacy. Once you’ve read that it’s a lot harder to regard other heroes in quite the same way as you once did.

Alongside the apparently heroic scientific work of Pasteur, work which has led to his lasting fame and celebrity within France and elsewhere, Latour sets all the other work that was necessary for Pasteur’s activities to change the way people thought and acted. Some of this work was conducted by Pasteur himself and will be familiar to anyone working in knowledge mobilisation: the work of reasoning and convincing and persuading and enrolling and so on. He played an important part in enrolling the various “actors” (that is, the individuals and groups and organisations) necessary to the success of his work (and there is an interesting sociological understanding by which we may think not only of the people involved in this but in the non-human actors too: Michel Callon’s (1986) account of the role played by scallops in debates over the scientific and economic debates about conservation and fishing in St Brieuc Bay in Brittany is exemplary).

There is no issue, then, that what Pasteur did was anything less than very important and scientifically remarkable. But much of it was conducted by others who worked for or around or simply at the same time as Pasteur, who supported Pasteur for reasons that range from the altruistic to the self-interested, the pragmatic to the political, and who were medical practitioners or farmers or local politicians or industrialists or rival scientists or something else entirely.

In short: we talk of Pasteur’s work and of pasteurization but in doing so we focus only on the activities of the person apparently at the centre and neglect all the work that went on around them, work that not only supported and publicized Pasteur’s activities but in many ways enabled and constructed it.

Latour describes the complexity of what occurred and the importance of the network of alliances that led to the production of scientific results and the construction of what is science. In Art Worlds (1982) Howie Becker proposed that the answer to the question “what is art?” is to be found among the individuals and groups that collectively define, through their discourse and their actions, what is and what is not art. Latour here addresses how the question “what is science?”, or perhaps “what comes to be regarded as scientific?” can be answered; the answer lies among all the individuals and groups that have an interest (or can be made to be interested) in the scientificity of a given claim or set of claims or proposed action.

And in emphasizing the absence of a boundary between science and society Latour addresses the claims of (some) scientists that those engaged in social studies of science don’t really understand science. His response seems to be that those who argue this don’t really understand society, and that insisting on science as an undertaking of pure reason neglects the importance of force (or power) in the making of any claim to truth or the realisation of any change. Latour’s concern is thus not simply with Pasteur’s scientific achievement but, perhaps, with how the science came to be regarded as an achievement (a process which took many years) and how the achievement, constructed in this way, ultimately led to practical changes in human health in France and worldwide. If we regard Pasteur as a hero then we might also consider how he came to be regarded in that way and what was necessary for the establishment of that regard.


So are there heroes in K* and implementation science? Sure, if you want there to be: go ahead and pick some. But for my money the always-already collaborative and systems-based nature of implementation means that thinking of individual heroes means ignoring the complex ways in which change really occurs and knowledge mobilisation actually takes place.


On the Battle of Poitiers (1356) as a failure of implementation

There were three noteworthy English victories over France in the Hundred Years War. The best-known is the final one, the Battle of Agincourt (1415), but the earlier battles are just as historically and strategically interesting.

The second of these was the Battle of Poitiers (1356) in which a combined English and Gascon army led by Edward, Prince of Wales (later known as the Black Prince) defeated a much larger French force. The French had a number of apparent advantages: they were on home territory, they had many more men (probably around 16,000, twice the size of Edward’s force of around 8,000), and they were eager to drive the English out of France because English forces had been at large for years and had pillaged and killed widely. Yet the French lost, and lost badly: their King, Jean II, was taken prisoner and the Oriflamme, the sacred French battle standard, was captured. The defeat was met with surprise across France and Europe and marked a turning point in the status and authority of the French nobility.

Historians have proposed a number of reasons for this unexpected loss. I think that our current understandings of implementation can be used to understand some of the failures of the French army. I suggest three implementation issues were involved and in relation to each we can see how Edward’s forces were successful in making beneficial changes that the French army failed to enact.

First, the English made longbowmen central to their army. Archers were an important contributor to the English victory at Poitiers, first firing upon the French cavalry head-on and then, when the knights’ armour proved too tough to penetrate, moving to one side and felling the horses with an attack on their flanks. The successful implementation here lay first in the English recognition of the power of the longbow and second in ensuring that the archers were effectively deployed in practice. The French also had archers and knew their power: they had suffered under the fire of English longbows at the Battle of Crécy ten years earlier. But they failed to integrate the archers into their fighting force as the English did, a failure that Barbara Tuchman ascribes to established social and cultural norms on the part of the French nobles: the French archers “were never properly combined in action with knights and men-at-arms, because French chivalry scorned to share its dominance of the field with commoners.” (153) On the English side this attitude was less dominant and they were able to benefit from the ranged power of the longbow.

A second factor played out in the tactics adopted by the French during the battle. The English force was very short of water and had dug in on a hill. Marshal Clermont, an experienced general and one of the senior French nobles present, proposed blockading the English and starving them out. Edward feared that the French would try this, and the approach would have had an excellent chance of success. However, King Jean opposed the idea because it was at odds with the rules of chivalry. He chose instead to engage with the English, and Clermont was among those killed in the fighting that followed.

Third, Edward had been able to organise his forces in a new way with some semblance of what we might recognise as a military hierarchy, with soldiers answerable to officers and officers to more senior commanders (this is not strictly true but captures the general idea). The French had no such structure and their commanders were at risk, as was often the case in medieval armies, from the fact that individual nobles and their followers might decide at any point that they had had enough and make a unilateral decision to leave the field of battle. With no notion of military discipline and troops’ loyalty in the first instance to their feudal overlord, the turning tide of the battle eventually led to a rout with surviving French nobles and foot soldiers fleeing before the rampaging English.

These were not the only things that contributed to the French defeat but they were important. The French lost, in part, because: their prevailing culture did not permit the effective implementation of a new technology (longbows); sociocultural factors prevented them from acting in a tactically beneficial way in reaction to the course of the battle; and they were tied to a harmful and outmoded organisational structure.

If we turn to contemporary writing on facilitators and barriers to implementation, we find similar barriers recognised. Implementation science has been described as a discipline that focuses, in part, on “the discovery and identification of social, organizational, and cultural factors affecting the uptake of evidence-based practices and policies” (Luke 2012). The “evidence” available in the fourteenth century was not the type of evidence we might want to inform policy decisions today, but it seems clear that social, organisational and cultural factors were key aspects of the French failure to implement new ways of thinking and acting, and that this failure became a major contributor to a French military disaster. Then, as now, social, organisational and cultural factors are central elements of what we have to recognise, consider, and address when implementing new practices or ways of doing things.

Plus ça change, plus c’est la même chose…


My understanding of this topic has been informed by Barbara Tuchman’s outstanding A Distant Mirror: The Calamitous 14th Century (Knopf, 1978).

Luke DA. 2012. Viewing Dissemination and Implementation Research through a Network Lens. In Brownson RC, Colditz GA, Proctor EK (eds). Dissemination and Implementation Research in Health: Translating Science to Practice. Oxford: Oxford University Press.


Senior Lecturer and Associate Dean at University of Exeter Medical School
I am interested in dementia, older people's health and wellbeing, and implementation science.

Why a health service is like a bicycle OR the importance of deimplementation

Much of the emphasis in knowledge mobilisation is on getting new things into practice. The term “implementation science” conveys this too: we want to implement stuff. But just as important can be getting things out of practice: de-implementation. The rationale is straightforward. A health service is much like a bicycle, and the case of Mrs Armitage makes the problem clear.

Mrs Armitage on her bike - so far, so good

Mrs Armitage is a creation of English writer and illustrator Quentin Blake. She appears in three books: Mrs Armitage on Wheels, Mrs Armitage and the Big Wave, and Mrs Armitage, Queen of the Road. In both Wheels and Big Wave Mrs Armitage engages in similar behaviour. She takes something that works – in the first case a bicycle, in the second case a surfboard – and, perceiving various shortcomings, adds to it until disaster threatens. (I’ll come back to Queen of the Road, which is different.) On the bicycle Mrs Armitage is concerned that hedgehogs won’t hear her coming so she adds a selection of motorhorns; worried that she may need tools in case of breakdown so adds a toolbox; alarmed that her dog Breakspear is tiring so adds a seat for him; and so on: a snack box, a radio-cassette player, a mouth organ… etc.

So it is, all too often, with our health services. New technologies, new ways of working, new diagnostics come along and, if they seem to work and we can implement them, we add them to what we provide. But we often don’t or can’t remove or reduce the form of care or technology that the new one was intended to replace. In other situations, things are implemented on the basis of little or no evidence, never challenged, and persist indefinitely as established practice. Like Mrs Armitage’s bike, the health service gets bigger, more expensive, more unwieldy.

Mrs Armitage's bike: overburdened, disaster beckons

And what happened to Mrs Armitage’s bike? Overburdened and out of control, it crashes and she and Breakspear find themselves sitting amongst the wreckage.

To avoid the looming possibility of expensive and unwieldy health services that risk crashing whole economies we need, rather than constantly thinking about implementing and adding, to think also about how to de-implement and take things away. Implementation Science published a useful short article on this by Vinay Prasad and John Ioannidis in which the authors set out a conceptual framework for evidence-based de-implementation, followed by a note from the editors stating that they welcomed further contributions on the topic. De-implementation is not just the opposite of implementation: it is likely to require different approaches, with thoughtful ways to identify the practices and technologies that should be de-implemented, and then work to find strategies and techniques to de-implement them and sustain the necessary changes.

Perhaps Quentin Blake thought about this too. In Queen of the Road Mrs Armitage begins with an antiquated car from which bits gradually fall off – hubcaps, roof, doors, and so on, always received by Mrs Armitage with a statement such as “Hubcaps? Who needs them?” – until she’s finally left with a stripped-down and efficient-looking roadster and anointed, by her uncle and his friendly biker friends, as Queen of the Road.

Hubcaps. Who needs them?

Getting to a stripped-down and efficient health service is the ultimate aim of both implementation and de-implementation. We just have to make sure we don’t forget the second part.



Prasad V, Ioannidis JPA. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014; 9: 1.

Cake-making and implementation

Becky wrote an interesting post recently about KM and cakes. She made the very good point that knowledge mobilisation isn’t something that happens at the end of the research process, like adding the icing to a cake, but that it is part and parcel of research. I thought this was a great metaphor and want to take it a little further.


Let’s stick with the idea that research is like making a cake. A certain understanding of research is that cake-makers (clever researchers) make a fantastic new cake (do some research), present the cake (do some dissemination, probably through the standard academic routes of peer-reviewed publication and conference presentations) to a room full of hungry people (an imagined audience of practitioners or clinicians or policy-makers or members of the public, etc.) and then the people consume it (it is taken up and becomes part of routine practice). Or at least that’s how some people seem to imagine it works.

More commonly, in my experience, the cake is made and then one of a number of things happens, including:

– lots of people hear about the cake and eat it (this doesn’t happen often)
– some people see the cake but think it’s for someone else so they don’t eat any
– some people hear about the cake and one or two people nibble the cake but they think it looks funny or smells funny or just plain don’t like the way it tastes
– a few people hear about the cake but there are so many cakes to choose from that they are distracted elsewhere
– a few people hear about the cake but already have a cake so don’t pay any more attention
– the cake sits in a cupboard for a while. Eventually it gets so mouldy nobody would ever eat it. Perhaps it’s sitting there still. (This happens a lot.)

So sometimes this approach works but very often it does not. A lot of the time this is because it’s simply the wrong cake.

It’s your birthday and someone brings along a wedding cake: wrong cake.
It’s breakfast time and someone brings along a rich chocolate cake: wrong cake.
Everyone’s asked to bring along a salad and you turn up with your delicious pineapple upside-down cake: wrong cake – in fact, wrong food altogether.

The cake need not be totally wrong – maybe you’re happy to eat wedding cake on your birthday or have chocolate cake for breakfast – but for a lot of people it will be. And so it goes with research, at least according to the model outlined above: something is prepared with limited or no understanding of the context in which it’s going to be used.

Wouldn’t it be better if we could avoid this problem? How about, instead of turning up with a cake and hoping the people in the room will like it, we speak to them beforehand and find out what kind of cake they would like? A birthday? Great – I can make you a birthday cake! Going even further, we can keep speaking to the people who will eat the cake throughout the process to find out what they need and have them contribute to the cake-making: How many people is the cake for? A little less sugar? How thick do you like your icing? You’re not going to produce the perfect cake but you’re going to come a heck of a lot closer to producing the kind of thing people want to eat than if you just turn up with your random cake.

Cake making. It’s an implementation thing. (The cake is not a lie.)

Metaphors we implement by

In their influential book Metaphors We Live By (2003[1980]), the cognitive linguists George Lakoff and Mark Johnson argue that metaphors are not just ways of communicating but that they shape the way we think about the world as well as how we act. In this understanding, metaphors are not just poetic devices or characteristics of colourful language but are pervasive in everyday life and as important to thought and action as to language.

They write: “The concepts that govern our thought are not just matters of the intellect. They also govern our everyday functioning, down to the most mundane details. Our concepts structure what we perceive, how we get around in the world, and how we relate to other people. Our conceptual system thus plays a central role in defining our everyday realities. If we are right in suggesting that our conceptual system is largely metaphorical, then the way we think, what we experience, and what we do every day is very much a matter of metaphor.” (p.4) As an initial example they refer to the conceptual metaphor “argument is war”, pointing out that it is picked up in a wide variety of related expressions: She attacked the weak point in my argument, I shot down all his arguments, Your claims are indefensible, and so on.

In relation to knowledge mobilisation, Huw Davies and colleagues (2008) picked up on the implications of the terms “knowledge transfer” and “knowledge translation”. They suggest that “the metaphor invoked by these terms is, at best, one of gathering and integrating evidence from research, condensing this into convergent knowledge, and neatly packaging this knowledge for transfer elsewhere. More often, it simply implies the dissemination of relatively undigested findings from single studies. In other words, knowledge parcels for grateful recipients. Such a view belies the inherent and, we would argue, largely insurmountable challenges of doing so for any but the most simple and incontrovertible of findings. Moreover, if the challenges of delivering convergent knowledge are large, the subtlety and complexity of research use in context further militate against simple models of ‘translate and transfer’” (Davies et al. 2008: 189). We might think about the implications of some of the other metaphors used for this and related activities: knowledge utilisation, knowledge mobilisation, knowledge management, technology transfer, and so on. Each of these terms implies something about what the activity involves, and is limited to, and could be critiqued in a similar way: we’re simply making knowledge mobile (because we don’t want stationary knowledge?) or using it (which raises questions about who is using it and to what end?).

If Lakoff and Johnson are correct then we should pay attention to what each of these metaphors implies. That each is limited is perhaps unavoidable and may partly explain why the KM field is split by different terminologies rather than united by an agreed-upon one. But we should go further and examine in more detail the consequences of these metaphors. If it is the case that they are not mere linguistic ornamentation but are orienting concepts that shape the way we think and act, then we need to consider whether they are shaping our thought and actions in the most appropriate and productive ways.

A long chapter in the Sage Handbook of Organisation Studies is devoted to metaphors of organisational communication. It opens with a commentary on the growth in studies of organisational communication, a growth which has been accompanied by a shift from linear transmission within organisations to “the way that social interaction, discursive processes and symbolic meanings constitute organizations” (Putnam and Boys 2006: 541). To date there has been no similar investigation of and reflection upon the metaphors of KM and their consequences. We might ask a number of questions: what are the key metaphors of KM? How are they employed? How do they influence the expressions we use when talking about KM (in line with the expressions about argument mentioned above)? How have they changed over time, and what underlying changes in our practice and understanding do these changes reflect? And, most importantly, how do these metaphors constitute the things we do when we do KM?

A maturing science needs a degree of awareness of and reflection on itself, on what it is and what it is not. KM would benefit from a greater sense of itself, and understanding the metaphors we use in doing this work would be one way to approach this.



Davies H, Nutley S, Walter I. 2008. Why ‘knowledge transfer’ is misconceived for applied social research. Journal of Health Services Research & Policy. 13(3):188-190.

Lakoff G, Johnson M. 2003. Metaphors We Live By. (New ed.). Chicago: University of Chicago Press.

Putnam L, Boys S. 2006. Revisiting metaphors of organizational communication. In Clegg SR, Hardy C, Lawrence TB, Nord WR (eds). The SAGE Handbook of Organisation Studies. London: Sage Publications. pp. 541-577.