At the 2023 HLTH conference, Munjal Shah, CEO of Hippocratic AI, shared his vision for using artificial intelligence to help alleviate widespread healthcare staffing shortages. Speaking on a panel titled “There’s No ‘AI’ in Team,” Shah argued that while diagnostic applications of AI remain risky, natural language processing holds promise for “super-staffing” nondiagnostic roles.
The annual HLTH event in Las Vegas brings together leaders in healthcare innovation, and this year much of the discussion centered on responsible applications of generative AI. Shah contends that staffing shortages present an ideal starting point: healthcare faces an estimated global shortfall of 10 million workers by 2030, and systems are already strained, limiting both access to and quality of care. Shah believes responsible use of natural language processing can help fill gaps in nursing, administration, and other nondiagnostic services.
Hippocratic AI, Shah’s startup, trains large language models (LLMs) to converse with patients. For instance, virtual assistants could provide chronic care support, explain bills and benefits, offer genetic counseling, or deliver test results. Shah stresses close collaboration with health systems in developing safe, helpful applications. Extensive training and feedback from human experts help ensure LLMs give sound, empathetic responses.
The HLTH panel concurred that AI alone cannot solve healthcare’s systemic problems, but targeted applications combined with human oversight show promise. LLMs can interact conversationally at massive scale for a fraction of human labor costs. As Munjal Shah described, AI “super-staffing” could provide services impossible for humans alone, such as post-discharge check-ins for every chronic disease patient. This improves access and frees human providers to practice at the top of their licenses.
Recent research indicates patients may even prefer LLM responses to some physician communications. LLMs excel at conversational tone and empathetic listening. Shah believes generative AI is ideal for the human side of medicine – educating, counseling, and encouraging patients.
However, LLMs still require extensive safeguards. Hippocratic AI uses a safety governance council and reinforcement learning from human feedback. This helps correct potential mistakes and build trust. Despite caveats, Shah sees responsible AI augmentation as a way to democratize quality care. He stated, “You can’t call every patient after starting new medications. But at this cost structure, maybe you can.”
In closing, Munjal Shah advocates narrowly targeted AI applications to alleviate healthcare’s staffing crisis. LLMs show promise for automating repetitive, nondiagnostic tasks at massive scale. This “super-staffing” can free human providers to practice at their full potential while expanding access for underserved populations. However, developing safe applications requires close collaboration between technologists and medical experts. Shah believes responsible AI augmentation is essential to democratizing healthcare in the 21st century.