Vivek Desai, Chief Technology Officer, North America at RLDatix – Interview Series


Vivek Desai is the Chief Technology Officer of North America at RLDatix, a connected healthcare operations software and services company. RLDatix is on a mission to change healthcare, helping organizations deliver safer, more efficient care through governance, risk and compliance tools that support overall improvement and safety.

What initially attracted you to computer science and cybersecurity?

I was drawn to the complexities of what computer science and cybersecurity are trying to solve – there is always an emerging challenge to explore. A great example of this is when the cloud first started gaining traction. It held great promise, but also raised some questions around workload security. It was very clear early on that traditional methods were a stopgap, and that organizations across the board would need to develop new processes to effectively secure workloads in the cloud. Navigating these new methods was a particularly exciting journey for me and a lot of others working in this field. It’s a dynamic and evolving industry, so each day brings something new and exciting.

Could you share some of the current responsibilities that you have as CTO of RLDatix?  

Currently, I’m focused on leading our data strategy and finding ways to create synergies between our products and the data they hold, to better understand trends. Many of our products house similar types of data, so my job is to find ways to break those silos down and make it easier for our customers, both hospitals and health systems, to access the data. With this, I’m also working on our global artificial intelligence (AI) strategy to inform this data access and utilization across the ecosystem.

Staying current on emerging trends across industries is another crucial aspect of my role, ensuring we are heading in the right strategic direction. I’m currently keeping a close eye on large language models (LLMs). As a company, we are working to integrate LLMs into our technology to empower healthcare providers, reduce their cognitive load and enable them to focus on taking care of patients.

In your LinkedIn blog post titled “A Reflection on My 1st Year as a CTO,” you wrote, “CTOs don’t work alone. They’re part of a team.” Could you elaborate on some of the challenges you’ve faced and how you’ve tackled delegation and teamwork on projects that are inherently technically challenging?

The role of a CTO has fundamentally changed over the last decade. Gone are the days of working in a server room. Now, the job is much more collaborative. Together, across business units, we align on organizational priorities and turn those aspirations into technical requirements that drive us forward. Hospitals and health systems navigate so many daily challenges, from workforce management to financial constraints, that adopting new technology may not always be a top priority. Our biggest goal is to showcase how technology can help mitigate these challenges, rather than add to them, and the overall value it brings to their business, employees and patients at large. This effort cannot be done alone, or even within my team; collaboration spans multidisciplinary units to develop a cohesive strategy that showcases that value, whether that means giving customers access to previously locked data insights or enabling processes they currently cannot perform.

What is the role of artificial intelligence in the future of connected healthcare operations?

As AI makes integrated data more readily available, that data can be used to connect disparate systems and improve safety and accuracy across the continuum of care. This concept of connected healthcare operations is a category we’re focused on at RLDatix because it unlocks actionable data and insights for healthcare decision makers – and AI is integral to making that a reality.

A non-negotiable aspect of this integration is ensuring that the data usage is secure and compliant, and risks are understood. We are the market leader in policy, risk and safety, which means we have an ample amount of data to train foundational LLMs with more accuracy and reliability. To achieve true connected healthcare operations, the first step is merging the disparate solutions, and the second is extracting data and normalizing it across those solutions. Hospitals will benefit greatly from a group of interconnected solutions that can combine data sets and provide actionable value to users, rather than maintaining separate data sets from individual point solutions.
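
The merge-then-normalize step described above can be sketched in a few lines. This is a minimal illustration only: the source names and field names below are hypothetical, not RLDatix's actual systems or data model.

```python
# Hypothetical sketch: normalizing records from two separate point
# solutions into one shared schema so they can be analyzed together.
# Source names and field names are illustrative assumptions.

def normalize_record(record: dict, source: str) -> dict:
    """Map a source-specific record onto a common schema."""
    if source == "risk_system":
        return {
            "patient_id": record["pid"],
            "event_date": record["date"],
            "description": record["details"],
        }
    if source == "safety_system":
        return {
            "patient_id": record["patient"],
            "event_date": record["occurred_on"],
            "description": record["narrative"],
        }
    raise ValueError(f"unknown source: {source}")

# Two records from different systems end up in one combined data set.
combined = [
    normalize_record(
        {"pid": "P1", "date": "2024-01-05", "details": "fall"},
        "risk_system",
    ),
    normalize_record(
        {"patient": "P2", "occurred_on": "2024-01-06", "narrative": "med error"},
        "safety_system",
    ),
]
```

Once every solution maps into the same schema, downstream analytics can treat the combined list as a single data set rather than juggling per-product formats.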

In a recent keynote, Chief Product Officer Barbara Staruk shared how RLDatix is leveraging generative AI and large language models to streamline and automate patient safety incident reporting. Could you elaborate on how this works?

This is a really significant initiative for RLDatix and a great example of how we’re maximizing the potential of LLMs. When hospitals and health systems complete incident reports, there are currently three standard formats for determining the level of harm indicated in the report: the Agency for Healthcare Research and Quality’s Common Formats, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index and the Healthcare Performance Improvement (HPI) Safety Event Classification (SEC). Right now, we can easily train an LLM to read through the text in an incident report. If a patient passes away, for example, the LLM can seamlessly pick out that information. The challenge, however, lies in training the LLM to determine context and distinguish between more complex categories, such as severe permanent harm, a taxonomy included in the HPI SEC, versus severe temporary harm. If the person reporting does not include enough context, the LLM won’t be able to determine the appropriate level of harm for that particular patient safety incident.

RLDatix is aiming to implement a simpler taxonomy, globally, across our portfolio, with concrete categories that can be easily distinguished by the LLM. Over time, users will be able to simply write what occurred and the LLM will handle it from there by extracting all the important information and prepopulating incident forms. Not only is this a significant time-saver for an already-strained workforce, but as the model becomes even more advanced, we’ll also be able to identify critical trends that will enable healthcare organizations to make safer decisions across the board.
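
The classification step described above can be pictured with a small sketch. The keyword rules below are only a stand-in for a trained LLM, and the category names loosely echo harm-level taxonomies like the HPI SEC; neither is RLDatix's actual model or taxonomy.

```python
# Illustrative stand-in for the LLM classification step: map free-text
# incident narrative to a harm level. Real systems would use a trained
# model, not keyword rules; categories here are illustrative only.

HARM_LEVELS = ["no harm", "temporary harm", "permanent harm", "death"]

def classify_harm(report_text: str) -> str:
    """Assign a harm level to a free-text incident narrative."""
    text = report_text.lower()
    if any(w in text for w in ("passed away", "death", "died")):
        return "death"
    if "permanent" in text:
        return "permanent harm"
    if any(w in text for w in ("injury", "harm", "reaction")):
        return "temporary harm"
    # Insufficient context: fall back to the lowest level, mirroring the
    # ambiguity problem described above when reporters omit detail.
    return "no harm"

outcome = classify_harm("Patient passed away after the procedure.")  # "death"
```

The interesting failure mode is exactly the one described in the answer: a narrative like "patient was harmed" matches a keyword but gives no basis for choosing between temporary and permanent harm, which is why a simpler, clearly separable taxonomy helps the model.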

What are some other ways that RLDatix has begun to incorporate LLMs into its operations?

Another way we’re leveraging LLMs internally is to streamline the credentialing process. Each provider’s credentials are formatted differently and contain unique information. To put it into perspective, think of how everyone’s resume looks different – from fonts, to work experience, to education and overall formatting. Credentialing is similar. Where did the provider attend college? What’s their certification? What articles are they published in? Every healthcare professional is going to provide that information in their own way.

At RLDatix, LLMs enable us to read through these credentials and extract all that data into a standardized format so that those working in data entry don’t have to search extensively for it, enabling them to spend less time on the administrative component and focus their time on meaningful tasks that add value.
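
The extract-into-a-standard-format idea can be sketched as below. The regexes stand in for the LLM (which would handle far messier, free-form credentials), and the field names are hypothetical, not RLDatix's credentialing schema.

```python
import re

# Hypothetical sketch: pull credential fields out of free-form text into
# a fixed schema. A trained LLM would handle much messier input; the
# regexes here just illustrate the standardized output shape.

def extract_credentials(text: str) -> dict:
    """Extract a few standard fields from an unstructured credential blurb."""
    school = re.search(r"graduated from ([^.,]+)", text, re.I)
    cert = re.search(r"certified in ([^.,]+)", text, re.I)
    return {
        "medical_school": school.group(1).strip() if school else None,
        "certification": cert.group(1).strip() if cert else None,
    }

sample = ("Dr. Lee graduated from State University, "
          "and is board certified in cardiology.")
fields = extract_credentials(sample)
```

However each provider phrases their history, the output always lands in the same fields, which is what spares the data-entry team from hunting through every document by hand.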

Cybersecurity has always been challenging, especially with the shift to cloud-based technologies. Could you discuss some of these challenges?

Cybersecurity is challenging, which is why it’s important to work with the right partner. Ensuring LLMs remain secure and compliant is the most important consideration when leveraging this technology. If your organization doesn’t have the dedicated staff in-house to do this, it can be incredibly challenging and time-consuming. This is why we work with Amazon Web Services (AWS) on most of our cybersecurity initiatives. AWS helps us instill security and compliance as core principles within our technology so that RLDatix can focus on what we really do well – which is building great products for our customers in all our respective verticals.

What are some of the new security threats that you have seen with the recent rapid adoption of LLMs?

From an RLDatix perspective, there are several considerations we’re working through as we develop and train LLMs. An important focus for us is mitigating bias and unfairness. LLMs are only as good as the data they are trained on, and demographic factors such as gender and race can carry inherent biases when the dataset itself is biased. For example, think of how the southeastern United States uses the word “y’all” in everyday language. This is a language bias unique to a specific patient population that researchers must account for when training the LLM to accurately distinguish language nuances across regions. These types of biases must be dealt with at scale when leveraging LLMs within healthcare, as training a model on one patient population does not necessarily mean that model will work in another.

Maintaining security, transparency and accountability are also big focus points for our organization, as well as mitigating any opportunities for hallucinations and misinformation. Ensuring that we’re actively addressing any privacy concerns, that we understand how a model reached a certain answer and that we have a secure development cycle in place are all important components of effective implementation and maintenance.

What are some other machine learning algorithms that are used at RLDatix?

Using machine learning (ML) to uncover critical scheduling insights has been an interesting use case for our organization. In the UK specifically, we’ve been exploring how to leverage ML to better understand how rostering, or the scheduling of nurses and doctors, occurs. RLDatix has access to a massive amount of scheduling data from the past decade, but what can we do with all of that information? That’s where ML comes in. We’re utilizing an ML model to analyze that historical data and provide insight into how a staffing situation may look two weeks from now, in a specific hospital or a certain region.

That specific use case is a very achievable ML model, but we’re pushing it even further by connecting it to real-life events. For example, what if we looked at every soccer schedule within the area? We know firsthand that sporting events typically lead to more injuries and that a local hospital will likely have more inpatients on the day of an event compared to a typical day. We’re working with AWS and other partners to explore which public data sets we can feed in to make scheduling even more streamlined. We already have data suggesting we’ll see an uptick of patients around major sporting events or even inclement weather, but the ML model can take it a step further by identifying critical trends in that data that will help ensure hospitals are adequately staffed, ultimately reducing the strain on our workforce and taking our industry a step closer to safer care for all.
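
The baseline-plus-event-uplift idea can be sketched very simply. Everything below is illustrative: the numbers, the uplift factor and the averaging approach are assumptions for the sketch, not RLDatix's actual model.

```python
from statistics import mean

# Illustrative sketch of the staffing-forecast idea: a baseline from
# historical admissions on comparable days, scaled up when a local
# event (e.g. a soccer match) is scheduled. All figures are made up.

def forecast_admissions(history: list[int], event_day: bool,
                        event_uplift: float = 1.2) -> int:
    """Project admissions from historical counts, scaled up on event days."""
    baseline = mean(history)
    return round(baseline * (event_uplift if event_day else 1.0))

past_tuesdays = [40, 38, 44, 42]  # admissions on comparable past days
quiet_day = forecast_admissions(past_tuesdays, event_day=False)  # 41
match_day = forecast_admissions(past_tuesdays, event_day=True)   # 49
```

A production model would learn the uplift from the historical data itself rather than hard-coding it, but the structure is the same: historical baseline plus external-event features feeding a staffing projection.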

Thank you for the great interview. Readers who wish to learn more should visit RLDatix.
