Vector Quality Sciences
RBQM Best Practices

5 Signs Your RBQM Platform Isn't Working (And What to Do About It)

8 min read · Last updated: January 2025

You've invested hundreds of thousands of dollars in a commercial RBQM platform. Your vendor promised real-time risk detection, streamlined monitoring, and data-driven quality oversight. Six months later, your team is still manually pulling data into Excel, and your CRAs are ignoring the dashboards.

Sound familiar? You're not alone. Industry surveys show that 60% of sponsors struggle with RBQM adoption despite significant technology investments. The problem isn't the platform itself. It's the gap between what the technology can do and how your team actually uses it.

Here are 5 warning signs that your RBQM platform isn't delivering value, and practical steps to fix each one.

1. Your Team Is Still Using Excel for Risk Oversight

The Symptom: Manual workarounds despite automation

What This Looks Like

  • CRAs are exporting data from the RBQM platform into Excel to create their own tracking sheets
  • Data managers maintain separate reconciliation spreadsheets instead of using platform reports
  • Executive team requests PowerPoint summaries instead of viewing live dashboards

Why This Happens

Your team reverts to Excel because the platform doesn't match their workflow. Maybe the KRIs don't align with how your CRAs think about site performance. Maybe the dashboards show 50 metrics but your team only cares about 5. Or maybe the platform requires 10 clicks to get to the information they need, while Excel gives them instant access.

The platform isn't wrong. It's just not configured for how your team actually works.

How to Fix It

  • Observe actual workflows: Shadow your CRAs and data managers for a week. Watch what they actually do, not what the SOP says they should do.
  • Simplify dashboards: Remove 80% of the KRIs. Focus on the 5-7 metrics your team actually uses to make decisions.
  • Create role-based views: CRAs need site-level detail. Executives need portfolio-level summaries. Don't make everyone look at the same dashboard.
  • Automate the Excel outputs: If your team needs Excel, set up automated exports instead of forcing them to manually download data.
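If the team genuinely needs Excel, the export itself can be scripted instead of downloaded by hand. A minimal sketch, assuming the platform can dump KRI data as rows via an API or scheduled extract; the field names and values here are illustrative, not a real vendor schema:

```python
import csv

# Hypothetical KRI rows from the RBQM platform's nightly export.
# Field names are illustrative, not a real vendor schema.
platform_rows = [
    {"site": "101", "open_queries": 14, "screen_fail_rate": 0.22, "enrolled": 18},
    {"site": "102", "open_queries": 3, "screen_fail_rate": 0.08, "enrolled": 25},
]

# Keep only the handful of metrics the team actually reviews.
KEEP = ["site", "open_queries", "screen_fail_rate", "enrolled"]

def write_tracking_sheet(rows, path):
    """Write a tracking sheet Excel opens directly, replacing manual downloads."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=KEEP)
        writer.writeheader()
        for row in rows:
            writer.writerow({k: row[k] for k in KEEP})

write_tracking_sheet(platform_rows, "cra_tracking.csv")
```

Run on a schedule (cron, Windows Task Scheduler), this turns the CRAs' manual download-and-paste routine into a file that is simply waiting for them each morning.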

2. KRI Alerts Are Ignored or Dismissed as False Positives

The Symptom: Alert fatigue and threshold gaming

What This Looks Like

  • Email alerts go unread because "they're always wrong anyway"
  • Teams request threshold increases to reduce alerts instead of investigating root causes
  • Real issues are missed because the system "cried wolf" too many times

Why This Happens

KRI thresholds are often set using vendor defaults or industry benchmarks that don't reflect your specific trial design. A 20% screen failure rate might be normal for an oncology trial but alarming for a diabetes study. When thresholds don't account for context, you get noise instead of signal.

The other problem is lack of actionability. An alert that says "Site 101 has high query rate" isn't useful if it doesn't tell you which CRFs are causing the queries or what action to take.

How to Fix It

  • Tune thresholds to your data: Use the first 3 months of trial data to establish baseline performance, then set thresholds at 1.5-2 standard deviations above the baseline mean.
  • Add drill-down context: Every alert should link to a detailed view showing what's driving the metric (e.g., which CRFs have the most queries).
  • Implement tiered alerting: Yellow = watch, Orange = investigate, Red = immediate action required. Not everything needs to trigger an email.
  • Track alert resolution: Measure how often alerts lead to action. If 90% of alerts are dismissed, your thresholds are too sensitive.
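The baseline-then-threshold step above fits in a few lines. A sketch, assuming weekly per-site query rates as the metric: the baseline values are invented, the yellow and red multipliers follow the 1.5-2 standard deviation rule of thumb, and the 1.75 multiplier for orange is an illustrative midpoint, not a standard:

```python
from statistics import mean, stdev

# Hypothetical weekly query rates per site (queries per 100 data points)
# from the first 3 months of the trial -- values are illustrative.
baseline = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 4.5, 5.1, 3.6]

mu, sigma = mean(baseline), stdev(baseline)

# Tiered thresholds: yellow = watch, orange = investigate, red = act now.
yellow = mu + 1.5 * sigma
orange = mu + 1.75 * sigma  # assumed midpoint, tune to your trial
red = mu + 2.0 * sigma

def classify(rate):
    """Map an observed site metric to an alert tier."""
    if rate >= red:
        return "red"
    if rate >= orange:
        return "orange"
    if rate >= yellow:
        return "yellow"
    return "green"
```

Only red (and perhaps orange) tiers need to trigger an email; yellow can simply change a dashboard color, which is exactly what keeps the inbox signal-to-noise ratio high.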

3. Data Quality Issues Aren't Caught Until Database Lock

The Symptom: Reactive firefighting instead of proactive prevention

What This Looks Like

  • Cross-system discrepancies (EDC vs. ePRO, EDC vs. Safety DB) discovered during database lock
  • Systematic data entry errors at specific sites go undetected for months
  • Database lock delayed by 4-6 weeks to resolve data quality issues

Why This Happens

Most RBQM platforms focus on operational KRIs (enrollment, queries, monitoring visits) but lack robust data quality reconciliation. They'll tell you how many queries are open, but they won't automatically flag that 8% of patients have mismatched enrollment dates between the EDC and IRT.

The other issue is timing. Data quality checks run during database lock are too late. By then, the site coordinator who made the error has moved to another job, and you're trying to reconstruct what happened six months ago.

How to Fix It

  • Implement automated reconciliation: Build scripts that compare key data points across systems daily (enrollment dates, AE counts, subject status).
  • Add data quality KRIs: Track cross-system discrepancy rates, missing data patterns, and out-of-range values as dedicated KRIs.
  • Run weekly reconciliation reports: Don't wait for database lock. Identify and resolve discrepancies while the data is still fresh.
  • Integrate with source systems: If your RBQM platform can't reconcile across systems, build custom R or Python scripts to do it.
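A reconciliation script of the kind described above can be very small. This sketch compares enrollment dates per subject between hypothetical EDC and IRT extracts; the in-memory dicts stand in for whatever CSV or API format your systems actually provide:

```python
# Illustrative extracts: subject ID -> enrollment date, one per system.
edc = {"S001": "2025-01-10", "S002": "2025-01-12", "S003": "2025-01-15"}
irt = {"S001": "2025-01-10", "S002": "2025-01-13"}

def reconcile(edc_dates, irt_dates):
    """Return per-subject discrepancies: mismatched or missing enrollment dates."""
    issues = []
    for subj in sorted(set(edc_dates) | set(irt_dates)):
        e, i = edc_dates.get(subj), irt_dates.get(subj)
        if e != i:
            issues.append((subj, e, i))
    return issues

discrepancies = reconcile(edc, irt)
# Here S002's dates disagree and S003 is missing from IRT entirely.
```

Run daily against fresh extracts, the output becomes the weekly reconciliation report: a short list of subjects to resolve while the site coordinator who entered the data is still there to ask.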

4. Platform Training Happened Once, Then Never Again

The Symptom: Knowledge gaps and tribal knowledge

What This Looks Like

  • New team members learn the platform by asking colleagues instead of formal training
  • Only 2-3 "power users" know how to use advanced features; everyone else sticks to basics
  • Platform capabilities are underutilized because no one knows they exist

Why This Happens

Vendor training during implementation covers the basics: how to log in, where to find reports, how to export data. But it doesn't teach your team how to interpret KRIs, tune thresholds, or customize dashboards for their specific needs. Six months later, when a new CRA joins, there's no refresher training—just a PDF user manual that no one reads.

The result is a team that uses 20% of the platform's capabilities and doesn't know what they're missing.

How to Fix It

  • Create role-based training modules: CRAs need different training than data managers. Build 15-minute videos for each role showing real workflows.
  • Schedule quarterly refresher sessions: Dedicate 30 minutes per quarter to showcase underutilized features or new platform updates.
  • Build a knowledge base: Create a searchable internal wiki with FAQs, troubleshooting tips, and best practices specific to your organization.
  • Designate platform champions: Identify 2-3 power users per team who can provide peer support and gather feedback for platform improvements.

5. The Platform Generates Reports, But No One Acts on Them

The Symptom: Data theater without decision-making

What This Looks Like

  • Weekly risk reports are generated, emailed, and filed without discussion
  • RACT meetings review the same KRIs every week without taking corrective action
  • Platform shows clear risk signals, but monitoring visits continue on the original schedule

Why This Happens

This is the most insidious problem because it looks like RBQM is working. You have dashboards, reports, and RACT meetings. But if those insights don't change behavior—if high-risk sites still get the same monitoring frequency as low-risk sites—then you're doing RBQM theater, not RBQM.

The root cause is often organizational, not technical. Your monitoring plan was locked in the protocol, and changing it requires a protocol amendment. Or your CRO contract specifies fixed monitoring visits, so you can't reallocate resources based on risk. The platform works, but your processes don't.

How to Fix It

  • Build decision rules into SOPs: Define clear actions for each risk level (e.g., "Red KRI = unscheduled monitoring visit within 2 weeks").
  • Track action completion: Measure how often risk signals lead to action. If you identify 10 high-risk sites but only visit 2, your RBQM isn't working.
  • Negotiate flexible CRO contracts: Move from fixed monitoring visits to risk-based resource allocation. Your CRO should support this.
  • Close the loop: After taking action, track whether the KRI improves. If not, either the action was wrong or the KRI doesn't measure what you think it does.
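Decision rules and action tracking can live as data rather than buried in a PDF SOP. A sketch: the red rule is taken from the example above, the orange and yellow rules are invented for illustration, and the completion metric mirrors the "10 high-risk sites, only 2 visited" test:

```python
# SOP decision rules encoded as data. The red rule follows the article's
# example; the orange and yellow rules are illustrative assumptions.
DECISION_RULES = {
    "red": "unscheduled monitoring visit within 2 weeks",
    "orange": "remote data review within 30 days",
    "yellow": "add to watch list; no visit change",
}

def action_completion_rate(signals):
    """signals: (site, tier, action_taken) tuples; rate covers actionable tiers."""
    actionable = [s for s in signals if s[1] in ("red", "orange")]
    if not actionable:
        return 1.0
    acted = sum(1 for s in actionable if s[2])
    return acted / len(actionable)

# Illustrative signal history: 4 actionable signals, 2 acted on.
history = [
    ("101", "red", True), ("102", "red", False), ("103", "orange", True),
    ("104", "yellow", False), ("105", "red", False),
]
rate = action_completion_rate(history)
```

A completion rate of 0.5, as in this toy history, is exactly the kind of number that turns "RBQM theater" into a measurable gap the RACT can own.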

The Bottom Line

RBQM platforms are powerful tools, but they don't work on autopilot. If your team is still using Excel, ignoring alerts, catching data quality issues too late, lacking training, or generating reports without action, your platform isn't the problem. Your adoption strategy is.

The good news? All five of these problems are fixable. It takes intentional effort to align the platform with your workflows, tune KRIs to your data, build ongoing training, and embed decision-making into your processes. But when you do, RBQM transforms from a compliance checkbox into a genuine competitive advantage.

Need Help Fixing Your RBQM Platform?

I specialize in RBQM platform enablement—diagnosing adoption gaps, reconfiguring platforms to match workflows, and training teams to use the technology effectively. If any of these 5 signs sound familiar, let's talk.
