IRB Highlights Standardized and Effective Metrics Model
Collect performance data to share
As IRBs and research programs increasingly seek IRBs of record and form reliance agreements, they will need to know whom to trust.
IRBs also need their own performance data to share with sponsors, researchers, and others. The challenge is developing metrics that work and can be used by other IRBs for benchmarking purposes. At least one IRB has found a possible solution.
“This has been an evolution of trying to look at metrics of my own IRB as we venture into single IRB territory,” says Ann Johnson, PhD, MPH, CIP, IRB and human research protection program (HRPP) director at the University of Utah.
“It’s become clear that we all speak an IRB language, but our metrics do not,” Johnson says. “Unless we use the same metrics language other IRBs are using, it will be challenging to compare apples to apples between IRBs.”
Johnson published a poster about a new model for recording IRB metrics at the 2019 Advancing Ethical Research Conference of Public Responsibility in Medicine and Research (PRIM&R), held in November 2019. (The poster can be found at: https://irb.utah.edu/_resources/documents/pdf/2019%20Poster%20-%20Standardized%20IRB%20Metrics%20A%20Johnson2.pdf.)
“This poster came from the idea that even though IRB review processes can vary across institutions, we all perform the same IRB steps,” Johnson explains. “We may have different names for them; we may throw in an extra step here and there that someone else doesn’t, but we all follow the same pattern.”
The standardized metrics model measures the time from IRB submission to IRB approval by breaking it down into smaller parts for each activity.
“This is a metric that people care about,” Johnson notes. “What I basically did was take that one big space of time and break it into two parts: the pre-review time, which a lot of IRBs track, and the review time,” she says.
The pre-review time is further broken down into these three parts:
- time during pre-review spent with the IRB office;
- time during pre-review spent with the investigator;
- time during pre-review spent with others.
The review time includes these three parts:
- time during review spent with the IRB office and members;
- time during review spent with the investigator;
- time during review spent with others.
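Taken together, the model defines six buckets: two phases crossed with three parties. As a minimal illustration of that structure (this is a sketch, not Johnson's actual tooling; the names and data layout are assumptions), the buckets might be represented like this:

```python
from dataclasses import dataclass, field
from datetime import timedelta

# Hypothetical encoding of the six buckets: two phases x three parties.
PHASES = ("pre_review", "review")
PARTIES = ("irb_office", "investigator", "others")

@dataclass
class StudyTimeline:
    """Time a single submission spent in each (phase, party) bucket."""
    buckets: dict = field(default_factory=lambda: {
        (phase, party): timedelta(0) for phase in PHASES for party in PARTIES
    })

    def add(self, phase: str, party: str, duration: timedelta) -> None:
        """Credit a span of time to whichever party held the submission."""
        self.buckets[(phase, party)] += duration

    def total(self) -> timedelta:
        """Submission-to-approval time: the sum of all six buckets."""
        return sum(self.buckets.values(), timedelta(0))
```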
“As an IRB, we like to point out to people that investigators can complain it took this long to get IRB approval, but the time spent wasn’t all by the IRB,” Johnson says. “I might have sent the study back for revisions, and [researchers] sat on it for three weeks.”
This metrics tool helps IRBs gain a more accurate picture of how their own time is spent vs. time investigators spend on answering the IRB’s questions and concerns.
“There’s a certain amount of defensiveness that IRBs have because we get blamed for how long we took. That makes us upset — especially if we do a good job,” Johnson says. “We want to show where the time was spent. That’s how my model works.”
For example, IRB staff may go back and forth with the study team two to four times on revisions. To accurately reflect the amount of time the study was in the IRB staff’s hands, the metrics should show how much time it was in the IRB office vs. in the researcher’s office, she explains.
“We’re counting how much time it’s in each of our hands. We add that all up, and it becomes one big time bucket,” Johnson says.
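In code, that tally might look like the following sketch, assuming the IRB logs a timestamped event each time a submission changes hands; the event format here is invented for illustration.

```python
from datetime import datetime, timedelta

def tally_handoffs(events):
    """Sum the time a submission spent with each party.

    `events` is an ordered list of (timestamp, phase, party) tuples, one per
    handoff; each party is credited with the time until the next handoff.
    The final event marks approval. This event format is hypothetical.
    """
    buckets = {}
    for (start, phase, party), (end, _, _) in zip(events, events[1:]):
        key = (phase, party)
        buckets[key] = buckets.get(key, timedelta(0)) + (end - start)
    return buckets

# Example: one back-and-forth during pre-review, then board review.
events = [
    (datetime(2019, 9, 2),  "pre_review", "irb_office"),
    (datetime(2019, 9, 6),  "pre_review", "investigator"),
    (datetime(2019, 9, 27), "pre_review", "irb_office"),
    (datetime(2019, 9, 30), "review",     "irb_office"),
    (datetime(2019, 10, 4), "review",     "irb_office"),  # approval
]
print(tally_handoffs(events))
# The three weeks the study "sat" with the investigator land in the
# ("pre_review", "investigator") bucket, not in the IRB office's bucket.
```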
For years, Johnson struggled with this part of metrics and with how the University of Utah IRB’s process compared with those of other IRBs.
“When comparing my IRB process with other IRBs, the others might say, ‘We wouldn’t send it back to the principal investigator then,’” Johnson says. “But if you put the time in the buckets, it doesn’t matter how many back-and-forths there are; it keeps a tally over time. We’re all comparing apples to apples.”
IRBs can assess how their time buckets average out to pinpoint where they are spending the most time. If the time spent on a particular process seems excessive, they can use the data to develop a quality improvement plan.
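As a rough sketch of that assessment (using the same assumed bucket format as above, not Johnson's actual tooling), averaging the buckets across studies and flagging the largest one suggests where to focus improvement efforts:

```python
from datetime import timedelta

def average_buckets(study_buckets):
    """Average each (phase, party) bucket across studies.

    `study_buckets` is a list of dicts mapping (phase, party) -> timedelta,
    one per study (format assumed, as in the sketch above). Returns the
    per-bucket averages and the slowest bucket, a natural first target
    for a quality improvement plan.
    """
    keys = {key for buckets in study_buckets for key in buckets}
    averages = {
        key: sum((b.get(key, timedelta(0)) for b in study_buckets),
                 timedelta(0)) / len(study_buckets)
        for key in keys
    }
    slowest = max(averages, key=averages.get)
    return averages, slowest
```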
“This can have a lot of impact on how we make decisions about improving processes,” she says.
For example, the IRB found its pre-review process took a long time, and investigated the cause. “At first, this concerned us,” Johnson says. “Then, we looked at the review bucket and saw things were speeding through that bucket.”
It was clear the IRB office was spending a lot of time in the pre-review process to make sure everything was correct. Then, when the study went to the full board for review, most minor details were resolved and the board could make a faster decision, she adds.
“People like that we get everything cleaned up in the beginning, and it sails through the board review,” Johnson says. “The board members were getting less frustrated because fewer things needed to be tabled. That was something we discovered by comparing the two sides of the buckets.”
Another way IRBs can use the metrics is by looking at investigators’ time buckets and comparing how long investigators take across different types of studies.
“For expedited studies, we find investigators generally get their things turned around quickly,” Johnson says. “For convened board reviews, we find it takes them a lot longer. We don’t know the reason for this.”
It makes sense if investigators take longer with convened board review studies because of their complexity. But it also could be because the IRB is not doing a good enough job with its revision requests and is confusing researchers, she notes.
“If that’s the reason, then it’s something we can fix,” Johnson says. “We’re just taking a look into how that’s happening.”
With data, IRBs can help investigators reduce the amount of time they spend making changes after IRB review or pre-review, she adds.
“That’s what we’re looking at right now,” Johnson says. “We’re ensuring our revisions are well-written, and we’re adding reminder systems beyond what we already have.”