Big data is at every IRB’s fingertips — but can you use it?
You need to know what you don’t know
Research institutions increasingly are investing in electronic systems that can collect and store big data. The information can be used to improve IRB processes, to identify systemic flaws or problems before they develop into a crisis, and to run an HRPP as efficiently as a major airport’s air traffic control.
But it takes more than software to achieve these outcomes. IRBs need to know how to use their electronic data-collecting systems and how to obtain the most benefit from them.
“If you have electronic data mining tools, then you can use those tools to assess workflow applications and to make decisions about resource applications,” says Judith Marie Birk, JD, director at the University of Michigan Medical School Institutional Review Board (IRBMED) in Ann Arbor. IRBMED recently purchased a new electronic solution that provides an improved look at an IRB’s big picture.1
“When you have a robust tool, it can tell you a story about your metrics and also allow you to drill down very systematically,” Birk says.
For example, electronic data mining tools with internally consistent data analytics have the following capabilities (a brief sketch follows the list):
• look at metrics of studies reviewed by the full board or through expedited review,
• drill down to department level to see which institutional areas require more resources,
• measure turnaround time at each step of the protocol submission process with real-time reporting and data visualization capability1,
• assess workflow issues, and
• examine resource allocation priorities.
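To make the first two capabilities concrete, here is a minimal sketch in Python with pandas. The table layout, column names, and figures below are invented for illustration; they are not IRBMED’s actual schema.

```python
import pandas as pd

# Hypothetical extract from an IRB data warehouse; every column name
# and figure here is an illustrative assumption.
submissions = pd.DataFrame({
    "department":      ["Cardiology", "Cardiology", "Oncology", "Oncology", "Pediatrics"],
    "review_type":     ["full_board", "expedited", "full_board", "expedited", "expedited"],
    "turnaround_days": [34, 12, 41, 15, 9],
})

# Metrics of studies reviewed by the full board vs. through expedited review.
by_review_type = submissions.groupby("review_type")["turnaround_days"].agg(["count", "median"])

# Drill down to the department level to see which institutional areas
# generate the most review volume (and may need more resources).
by_department = submissions.groupby(["department", "review_type"]).size().unstack(fill_value=0)

print(by_review_type)
print(by_department)
```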
“Everybody has data, but it’s hard to work with that data because reports tend to be one-off reports,” Birk says. “People will run a report today and then run it again in two weeks and compare.”
IRBMED has used its data technology when recruiting additional IRB reviewers from University of Michigan departments that have high research volume, she notes.
“It’s been beneficial to us to show a department why we need more members of the IRB,” she says. “We can say to the chairman of the department, ‘We would like you to give us another reviewer because you’re high volume.’”
If the chair answers that the IRB already received another member from that department, it’s time to pull out the data to show how much volume had increased in the previous 12 months, she adds.
“We can go across time and show volume and the type of submissions made to the IRB, year by year,” Birk says. “We can indicate the number of reviews we’ve assigned to each expert in a particular department and demonstrate that while we have only two reviewers, we really need four because they’re overloaded.”
Having robust data has given more credibility to the IRB’s request for more expert assistance. “We could have asked for more reviewers before, but it would have taken 10 times longer,” she says.
With big data, the IRB has a global view of where workflow blocks occur and why.
A more robust tool can be fully integrated with IRB processes and provide instant comparisons across any set time period. It also will provide user-friendly, interactive dashboards with graphics that make data trends easier to understand.
“What our new system will do is let you set the parameters and then define how you want it broken down, whether it’s by month, years, or weeks,” Birk explains. “You can make a process change and then measure to look at its success.”
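As a rough sketch of how such a parameterized breakdown might be expressed in code, assuming a hypothetical submission log with a received date and a turnaround figure (all names, dates, and numbers invented):

```python
import pandas as pd

# Hypothetical submission log; the dates and figures are invented.
log = pd.DataFrame({
    "received": pd.to_datetime(["2015-01-05", "2015-01-20", "2015-02-03",
                                "2015-02-18", "2015-03-02", "2015-03-25"]),
    "turnaround_days": [30, 28, 22, 21, 18, 17],
}).set_index("received")

# The same metric, broken down by whatever period the user selects:
# "W" for weeks, "MS" for months, "YS" for years.
def median_turnaround(period: str) -> pd.Series:
    return log["turnaround_days"].resample(period).median()

# Make a process change (here, hypothetically, on March 1) and then
# measure to look at its success.
before = log.loc[:"2015-02-28", "turnaround_days"].median()
after = log.loc["2015-03-01":, "turnaround_days"].median()
print(median_turnaround("MS"))
print(f"median turnaround: {before} days before the change, {after} after")
```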
It’s also important to make certain the data technology ensures data validity, she says. The institution can do this by developing a set of reporting queries that electronically interrogate the IRB and human resources systems daily. This safeguards against human error and bias.1
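The article does not describe those queries in detail; below is one hedged sketch of what a daily cross-system check could look like, using SQLite in-memory tables as stand-ins for the two systems. All table and column names are hypothetical.

```python
import sqlite3

# Tiny in-memory stand-ins for the IRB and human resources systems;
# every table and column name here is a hypothetical illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE irb_assignments (submission_id TEXT, reviewer_id TEXT);
CREATE TABLE hr_active_employees (employee_id TEXT);
INSERT INTO irb_assignments VALUES ('HUM001', 'u100'), ('HUM002', 'u200');
INSERT INTO hr_active_employees VALUES ('u100');
""")

# Daily consistency check: flag review assignments whose reviewer no
# longer appears in HR's active-employee list, a guard against stale
# or erroneous records.
orphaned = conn.execute("""
    SELECT a.submission_id, a.reviewer_id
    FROM irb_assignments AS a
    LEFT JOIN hr_active_employees AS h ON h.employee_id = a.reviewer_id
    WHERE h.employee_id IS NULL
""").fetchall()
print(orphaned)  # [('HUM002', 'u200')]
```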
Birk provides the following strategies for using data technology optimally:
• Determine your IRB’s priorities and analyze data related to the most important process first. “Every IRB is conscious of turnaround time,” Birk notes. “We want to optimize our workflows and have a quick turnaround for investigators; we want to be efficient and get materials back in the hands of investigators as soon as possible.”
An IRB’s data technology should let the IRB office examine each step of the protocol review process so the review can be optimized, she adds.
“One thing that’s been historically difficult to examine is the amount of time a study spends with the study team and IRB office and reviewers,” Birk says. “There typically are three big components, and for clinical research studies there can be a fourth: any additional committees that have to review a study, like a radiation safety committee or research pharmacy.”
A data technology tool can track an application from the time it enters the electronic system and examine it across its entire lifespan. The IRB can then assign metrics and use the collected information to drive workflow and other decisions related to optimizing efficiency, she adds.
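A sketch of that lifespan calculation, assuming the electronic system writes a timestamped event each time a submission moves to a new stage (the stage names mirror the components Birk describes; the schema itself is an assumption):

```python
import pandas as pd

# Hypothetical timestamped event log for one submission. The stages
# follow the components described above (study team, IRB office,
# reviewers, ancillary committee); the column layout is an assumption.
events = pd.DataFrame({
    "submission_id": ["HUM001"] * 5,
    "stage": ["study_team", "irb_office", "reviewer",
              "ancillary_committee", "approved"],
    "entered": pd.to_datetime(["2015-01-05", "2015-01-12", "2015-01-15",
                               "2015-01-22", "2015-01-29"]),
}).sort_values("entered")

# Time spent in each stage is the gap until the next stage begins;
# the final "approved" row has no successor, so it shows NaN.
events["days_in_stage"] = events["entered"].shift(-1).sub(events["entered"]).dt.days
print(events[["stage", "days_in_stage"]])
```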
• Use data technology to address workflow and personnel issues. When the IRB office discovers a workflow problem in the data, the problem can be discussed with staff and used to develop process improvements. “There might be perceptions that are not accurate, and this helps to eliminate perceptions,” Birk says.
Also, big data can highlight systemic or overall process problems, such as a work system that uses staff resources inefficiently.
“IRBs receive many types of submissions, including initial, adverse events, continuing review, and some IRBs have individuals do everything from soup to nuts, while others have a team approach,” Birk explains.
Using data technology, an IRB can assess in any given week whether the current process works best. For example, an IRB might identify a particular staff member whose timeliness has slowed during a week. That person appears to be overloaded, Birk says.
With this real-time data, the IRB can shift some work to another member of the IRB staff and expedite that week’s review process, she adds.
“You can watch this on a daily basis as work comes in, and you can assess and balance by staff expertise, turnaround time, and workload,” she says. “It gives you more flexibility in the office.”
With the right data markers, it’s possible to identify these kinds of work logjams and quickly correct them.
“For this to work, all IRB work would need to be assigned by individual with an automatic time stamp associated with the work,” Birk says. “This is so you can measure when the clock starts and the volume.”
And the data warehouse would need to be refreshed nightly so the IRB always is working off one-day-old data, she adds.
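A minimal sketch of how that nightly snapshot could drive rebalancing, assuming a hypothetical assignment table in which every item carries its automatic time stamp (names, dates, and thresholds invented):

```python
import pandas as pd

# Hypothetical assignment table from the nightly warehouse refresh;
# each open item carries the automatic time stamp that starts the clock.
open_items = pd.DataFrame({
    "staff": ["alice", "alice", "alice", "bob"],
    "assigned_at": pd.to_datetime(["2015-03-02", "2015-03-03",
                                   "2015-03-04", "2015-03-04"]),
})
as_of = pd.Timestamp("2015-03-05")  # working off one-day-old data

open_items["age_days"] = (as_of - open_items["assigned_at"]).dt.days
load = open_items.groupby("staff").agg(items=("staff", "size"),
                                       oldest_days=("age_days", "max"))

# Flag anyone carrying more than the office's average open workload as
# a candidate for shifting work to another staff member.
overloaded = load[load["items"] > load["items"].mean()]
print(overloaded)  # alice: 3 open items, oldest 3 days
```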
• Be detail-oriented when selecting markers. An IRB could think of the data markers as falling into two groups. The first group tells the research institution’s basic story — the transparent information it wants the world to see — and these markers would be collected and produced in canned reports, Birk explains.
“What do you want to tell the world about what we do?” she says.
These reports could include descriptions of each research entity’s portfolio of research. Detailed data in each report would describe how many studies went to the full board or through expedited review, along with review turnaround times, Birk says.
“Those are the standard metrics that tell the story,” she says. “If an institutional official wants to know information, or if your institution belongs to a consortium that wants to know what they look like, these are the metrics.”
The second group includes data markers selected by the IRB to improve its own processes and quality. “You determine what’s in your data warehouse based on what questions you want answered,” Birk says.
“Right now, we have a project to look at streamlining one of our review models within the office, and we’re breaking that down into each of its component steps,” Birk explains. “We’re starting with the submission of the project to the office and how it is assigned to a reviewer and the workflow associated with that.”
The goal is to reduce the turnaround time, so the tool includes markers that will help answer that question, she adds.
“We actually asked to have the data warehouse refreshed recently to get some historical and baseline data so we could make a change to the workflow and have a benchmark for it,” Birk says.
The markers are very specific to any question the IRB wants answered and typically require multiple steps. For example, an institution might have a department with high-volume research. The IRB would want to know the answers to these questions (a brief sketch follows the list):
- When do they submit studies and are there any seasonal or monthly trends?
- Why do they submit the most studies in a specific period of time?
- Are many of these studies from students or medical fellows?
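A brief sketch of how the first and third questions could be answered from a hypothetical table of one department’s submissions (all dates and roles invented for illustration):

```python
import pandas as pd

# Hypothetical submission records for one high-volume department;
# the dates and submitter roles are invented.
subs = pd.DataFrame({
    "submitted": pd.to_datetime(["2014-09-03", "2014-09-15", "2014-09-22",
                                 "2015-02-10", "2015-09-08", "2015-09-30"]),
    "submitter_role": ["student", "fellow", "faculty",
                       "faculty", "student", "fellow"],
})

# When do they submit studies, and are there seasonal or monthly trends?
by_month = subs.groupby(subs["submitted"].dt.month).size()

# Are many of these studies from students or medical fellows?
by_role = subs["submitter_role"].value_counts()

print(by_month)  # month 9 (September) dominates in this toy data
print(by_role)
```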
“Maybe most studies are submitted in September, so the department needs more IRB reviewers at that time,” Birk says. “Or maybe they’re high volume all the time, and the IRB needs more faculty reviewers from the department to do the work.”
These are the kinds of assessments the IRB can do with the correct markers, she says.
• Check the operations dashboard to make daily or weekly tweaks and changes. Dashboards are visually appealing ways to describe data. They can use color theory and graphical concision to give the IRB a quick visual understanding of trends and patterns.1
They can use bar graphs, line graphs, and other graphics. At IRBMED, viewers, depending on their security level, can drill into any area of a chart and see a consistent narrative. For example, one graphic shows staff reviewer turnaround times broken down by five teams across five consecutive years.1
“We’ve set up the dashboard to produce reports at the click of a button; we can download reports and set them up to be produced with a visual — a table, or graph, or chart,” Birk says. “Then you can access the raw data behind that, and it can be exported to Excel.”
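A sketch of that click-to-report pattern, assuming pandas with matplotlib for the visual and openpyxl for the Excel export (the team names and figures are invented):

```python
import pandas as pd

# Hypothetical turnaround report behind one dashboard chart.
report = pd.DataFrame({
    "team": ["Team 1", "Team 2", "Team 3", "Team 4", "Team 5"],
    "median_turnaround_days": [18, 22, 15, 27, 20],
})

# One click produces the visual (requires matplotlib)...
ax = report.plot.bar(x="team", y="median_turnaround_days", legend=False)
ax.figure.savefig("turnaround_by_team.png")

# ...and the raw data behind it can be exported to Excel
# (requires openpyxl).
report.to_excel("turnaround_by_team.xlsx", index=False)
```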
The standard dashboard provides a look at the things an IRB wants to know in an ongoing and flexible way. It also can highlight outliers and give details that help uncover the root cause of a problem, such as a processing delay.1
Across time, IRBs can ask different questions and watch for long-term trends and workflow patterns. “Having a benchmark is very important; knowing where you’re coming from and then tracking it going forward,” Birk adds.
Reference
1. Smith C, Ramani V, Birk J. Using visual and data technology to inform IRB operations. Presented at the PRIM&R Advancing Ethical Research Conference; Dec. 5-7, 2014; Baltimore, MD.