Articles and Opinions

The articles and position papers on this page reflect experiences and assessments that have developed over many years of collaboration with international colleagues on a variety of projects. The originals of these articles can also be found on my LinkedIn account.

Artificial Intelligence and Natural Stupidity

Digitization is probably one of the words most frequently used when managers are asked about the future of their company. This is consistent across quite different industries, from financial institutions and biopharmaceuticals to consumer products and even farming. There is a universal belief that this is the route to go, and hardly anyone would dare to openly disagree with it. When you get into the details, though, and ask a group of four people for their interpretation and understanding of what "digitization" means, you may easily end up with four quite different definitions of it.
Similar results may be obtained if you discuss related buzzwords like "Blockchain", "Big Data", "eHealth" or "Artificial Intelligence". The problem with these discussions is that they typically start at the wrong end: rather than identifying the issues to be solved, we talk about a technology we want to get implemented. It is usually the middle management layer within - not only pharmaceutical - companies that is in charge of demystifying the buzzwords. These managers are torn between the operational hurdles of their day-to-day work and upper management's request to become more "digital" or more "agile". As an example, it may be much more important to accelerate study start-up or to figure out how to get faster to well-performing clinical sites. And yes, for some of these issues there might be a technology which, when properly applied, well understood and embedded in proper processes, will help to mitigate the problem. Getting this communicated to the appropriate management level is a tough challenge for every manager. The bad news, though, is that the challenge doesn't stop there. Managing the expectations about a certain project can be similarly challenging, in particular if the project has anything to do with Artificial Intelligence (AI). This is where visibility and management expectations are extraordinarily high. Unrealistic expectations are quite typical because "everyone else is already doing it" and there are always some kind of "breakthrough" examples. Most of the AI methods which check for patterns in data and predict future outcomes are 30 years old. With today's processing power and the ability to access a whole universe of data, these methods are experiencing a renaissance and, of course, lead to impressive results. On the other hand, people like Alison Gopnik never tire of describing the limitations of AI. Alison Gopnik is a professor of psychology and philosophy at Berkeley. Her research focuses on how young children learn about the world around them.
She has wonderful examples comparing children's learning approach with AI. Interestingly, for some of these problems, four- to five-year-old kids learn much better than any currently available AI. Going back to the manager from the middle management layer: it is a tough job to fulfill upper management's expectations while at the same time doing the "real work", which includes demystifying buzzwords and translating their essentials into operational chunks that could be of value to the organization. This requires overcoming what Gopnik calls "Natural Stupidity": the semantic opposite of AI. At this stage, most algorithms (or their outcomes) require some human review and assessment. There are excellent examples from Risk-Based Monitoring (RBM), where a risk indicator is an indicator, but nothing more, and therefore needs human assessment of what to do when it triggers an alarm. What happens when we fail to leave Natural Stupidity behind and rely on the algorithms alone is something I experienced when doing some "curated shopping". This concept targets lazy male shoppers - which I am. All you need to do is register online, provide your body measurements, and identify your style by clicking on corresponding pictures. You then get a box of fashion goods (a selection of everything from jackets to shirts, t-shirts, socks and shoes) delivered to your home at regular (configurable) intervals. What I really like about this concept is that you don't have to go to a shop, you avoid the weekend crowds, you have all the time you want to try things on, and there is no obligation to buy anything: everything you don't like can be returned at no cost. The trouble starts with the things the algorithm has selected for you. I liked the beige pullover and the black slim jeans when I received them the first time. But why would I buy the very same items again less than a year after I bought them the first time?
To me this is just a minor and funny nuisance. In the context of clinical development, however, these examples illustrate the need for experienced and knowledgeable people who use their natural intelligence to manage the results of AI and algorithms.
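The RBM risk indicator mentioned above - a flag that demands human assessment rather than an automatic verdict - can be illustrated with a minimal sketch. The indicator, threshold and site figures here are hypothetical assumptions for illustration, not taken from any real monitoring system:

```python
# Minimal sketch of a risk-based monitoring (RBM) indicator.
# The query-rate metric, the 5% threshold and the site numbers
# are illustrative assumptions, not a real system's values.

def query_rate_indicator(open_queries: int, data_points: int,
                         threshold: float = 0.05) -> dict:
    """Flag a site whose open-query rate exceeds a threshold.

    The indicator only raises a flag; deciding what to do with it
    (site retraining, a monitoring visit, or nothing at all) is
    deliberately left to a human reviewer.
    """
    rate = open_queries / data_points if data_points else 0.0
    return {
        "rate": rate,
        "flagged_for_review": rate > threshold,  # a flag, not a verdict
    }

sites = {
    "site_101": query_rate_indicator(12, 400),  # 3.0%  -> below threshold
    "site_102": query_rate_indicator(48, 500),  # 9.6%  -> flagged
}
for site, result in sites.items():
    if result["flagged_for_review"]:
        print(f"{site}: review by a monitor recommended")
```

The point of the sketch is the last loop: the algorithm narrows attention to one site, and a person decides what the alarm actually means.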

Modern Clinical Data Management: Fear, Uncertainty and Doubt

According to Wikipedia, Fear, Uncertainty and Doubt (FUD) is "a strategy to influence perception by disseminating negative and dubious or false information and a manifestation of the appeal to fear". The phrase dates to at least the early 20th century. How does this relate to the disinformation commonly spread today about software, hardware and technology in the biopharma industry?
Well, it is actually closer than you would think, so let's look at some data management issues which are not new but obviously not solved either. Quite a number of scientific evaluations have examined the impact of "wrong" data on study results. Whether errors slipped through because Source Data Verification (SDV) didn't catch them or because edit checks were insufficient, all these studies show that collected data is highly robust against errors. Nevertheless, we are all afraid of missing or erroneous data (Fear), we are uncertain about the number and kind of edit checks we need (Uncertainty), and we doubt that our monitoring and data management forces are doing enough to deliver data quality that matches regulatory expectations (Doubt). There seems to be little motivation to change the situation: CROs have sponsors pay for "intensive" data checking, sponsors want to be "on the safe side", and technology providers try to sell innovations which claim to solve everything. Meanwhile, we are trying to use new clinical research information technologies without taking into account the impact this may have on the overall data flow and the corresponding processes. What started with EDC (applying paper processes in an electronic world) continues with eCOA and mHealth approaches. In a recent client example, we observed five different data sources and multiple technology providers, plus a Data Management / Statistics CRO in charge of transferring, consolidating, cleaning, transforming and eventually analyzing the data. Primary endpoint data came from an eCOA device in the form of a symptom score. Of course the sponsor was concerned about the completeness, consistency and overall quality of the data.
Although the eCOA data was transferred to the eCOA vendor's database instantly, the transfer of eCOA data to the DM CRO didn't happen until two months after study start, by which time 80% of the overall data had already been collected. Similarly, errors within the eCOA system or device handling issues at sites and with patients could only be discovered relatively late in the process, after data transfer. This leads to the ongoing and critical questions all sponsors have about the proper data flow, the processes and the single point of truth for all data - questions that become more complicated as the number of different data sources keeps increasing. Does the technology solution need to be one single big repository? This topic has been discussed for many years, with quite different answers coming from technology providers (or CROs) on the one hand and company-specific solutions among sponsors on the other. Defining and consistently executing proper processes for these scenarios is at least as important as the technology itself. This requires people with technical understanding, flexibility and process thinking, and the challenge will be to identify, develop, recruit and retain the people who will do the work in this evolving environment. Eventually, the combination of people, processes and technology (in this order!) can make a difference. It will enable us to gain better efficiency and could be the route that guides us from FUD to CCT: Confidence (we do it right!), Certainty (we do it in the most appropriate way) and Trust (we do it with the best people).
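The completeness concern described above - did every expected symptom-score entry actually arrive? - is the kind of check that should run as soon as data flows, not two months in. A minimal sketch, assuming a hypothetical daily-diary design; the field shapes, dates and one-entry-per-day rule are illustrative assumptions, not any vendor's actual export format:

```python
# Hypothetical completeness check on transferred eCOA diary data.
# Assumes one symptom-score entry is expected per day in the diary
# window; dates and counts below are made up for illustration.

from datetime import date, timedelta

def expected_entries(start: date, end: date) -> int:
    """Number of daily entries expected in the diary window (inclusive)."""
    return (end - start).days + 1

def completeness(received: list[date], start: date, end: date) -> float:
    """Fraction of expected daily entries actually received."""
    in_window = {d for d in received if start <= d <= end}
    return len(in_window) / expected_entries(start, end)

start, end = date(2023, 3, 1), date(2023, 3, 10)  # 10 expected entries
received = [start + timedelta(days=i) for i in (0, 1, 2, 4, 5, 6, 8)]
rate = completeness(received, start, end)          # 7 of 10 entries
print(f"diary completeness: {rate:.0%}")           # -> 70%
```

Running a check like this on every transfer, rather than after consolidation at the DM CRO, is what would have surfaced the device handling issues early.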

Your vendor oversight probably needs improving

It is interesting that the CRO industry has grown to many billions of dollars per year in revenue, while its biopharmaceutical customers have done so little to prepare themselves to properly manage the activities done on their behalf.
Those for whom these outsourced services are being performed - the clinical development operations managers, the research physicians, the data managers - have usually never been trained in the oversight of outsourced tasks, in how to evaluate potential service providers, or in how to use them efficiently. This situation invites inefficiency, not only in dollar terms but scientifically: in terms of access to and retention of proprietary data, real-time knowledge of status and performance, and the ability to assess and reflect on development program progress and strategy. Oversight of outsourced providers is mandated by regulation. Yet usually the most that biopharmas have done is to designate the contracting/purchasing department to do the oversight - a department with little or no clinical development background. This separation of oversight from those with the greatest internal knowledge, and the greatest vested interest in performance, is a common but damaging error. As recent highly publicized reports have emphasized, the use of CROs has not reduced cost or shortened timelines. Indeed, at the executive level, performance is not the concern, but rather only the trading of fixed costs for variable ones. Some would call this a cynical bargain, or at least a cold financial choice. This is not unique to biopharma, but perhaps it is more concerning considering the importance of our work to human health.

The Contract Will Fix Everything

In theory the contract between a vendor (CRO or software provider) and a sponsor should be the reference for both sides in case of doubt. In reality the contract complicates and obstructs vendor oversight management. The lack of domain knowledge in the contracting department, and the lack of time for input from the in-house experts, increases the risk of suboptimal contract terms, which will not get noticed until execution.
For instance, if the in-house experts have not been asked for their advice on payment triggers (or did not look at them), you can frequently find payments for the wrong items (e.g., query resolution, per visit, per data check) and high costs for rather repetitive tasks (for instance, programming of tables, listings, and figures). Overall it seems that there is too often a high tolerance for failed milestones anyway, as eventually the sponsors just "want to get things done". As the contracts department gets more powerful, its naturally "legalistic" and procedural focus can alienate providers and create extended delays, especially when dealing with large providers that have robust legal departments of their own. Indeed, study start-up times are growing as studies in complex therapeutic areas increasingly rely on hospitals, which have their own efficiency issues. Meanwhile, one of the long-outstanding controversies has not been resolved: should or can sponsors write incentives and penalties into their outsourcing contracts? If so, how should they be written and executed?

The Mismatched Business Goals

The most overarching issue in vendor management is the operational mismatch between service providers and their customers. On one side there is the biopharma - a complex organization with its multi-million-dollar projects and the goal of bringing its medical innovations to market as quickly as possible. On the other side there are the CROs, which are organized more like a "unit of work" factory and for which (as publicly traded service companies or private-equity-driven enterprises) quarterly cashflow goals are paramount. At the base level, sponsors have drugs to develop (at very high cost and risk of failure); service providers, on the other hand, simply have bodies to keep busy. Indeed, efficiency is not an inherently desirable goal for a service provider whose contracts are time-based.
This is not about company size, since some CROs are bigger than many biopharmas. It is more about the incompatibility of company cultures. Changes of priorities, project successes and project failures are common when working for a sponsor. Biopharmas have worked hard over the last 15 years to tear down intracompany silos and to attract and develop broadly qualified people; vendors tend to lag behind in this regard. In practice this leads to significant expectation mismatches: where the sponsor expects flexibility and a solution-focused approach, CROs often take a more formalistic, "one step after the other" approach, and silo thinking is much more pronounced than on the sponsor side. Eventually, this may become an important hurdle for collaboration. So the question should be how to jump the hurdle, rather than whose fault it is.

Efficient CRO Oversight - Where To Start?

Depending on the status and preparedness of a clinical development organization, it will take time and effort to develop and implement a (new) vendor oversight strategy. So where to start? What to do first? The answer begins with doing your homework: understand your past successes and failures and the reasons for them, assess your typical contract language, and establish your medium- and long-term outsourcing strategies. This will lead to identifying areas for internal process optimization. If, for instance, your study start-up processes are not working well, this may become a roadblock for better vendor management. If a company's electronic Case Report Form (eCRF) design is not optimal, this may lead to site dissatisfaction and unnecessary costs. There may be misalignment between a sponsor's standards and processes and the more "advanced" state of the CRO's. The metrics which guide you in understanding and judging CRO performance may also need revision - most sponsors use too many metrics, and the data on which they are based is out-of-date or inaccurate.
A properly streamlined approach to defining and collecting those metrics will inform all steps of the process, from original project design to contracting to oversight and learning. Ultimately, all aspects of the sponsor-CRO collaboration should be captured in a vendor oversight plan. This plan will be the guidance for everyone, and together with your completed homework it should give you a good start into knowing what needs to be done to achieve proper risk-based vendor oversight.

Successful CRO Oversight

The guiding principle for a healthy sponsor/vendor relationship is that the sponsor (big or small, experienced or naïve) should govern the relationship with its service providers. Although most vendors will insist they are your "partners", managing sponsor/provider communication and control from a position of accepted authority is key for the biopharma sponsor. Collaboration should never mean abdication: the authority needs to be natural and fact-based. In our experience, clinical development organizations are typically neither prepared nor staffed to set up a successful vendor oversight management strategy. Instead, sponsors jump to another vendor or another outsourcing model; there is never time or money to set up something sustainable from scratch. In consequence, the learning from suboptimal experiences is never applied or considered relevant. This creates a succession of suboptimal experiences, which clinical development organizations can no longer afford.