
How to adopt new insurance technology without breaking operations

February 12, 2026 | Insurance data decisioning, Insurance process automation, Insurance Underwriting

Insurance organizations are under constant pressure to modernize. New tools promise efficiency, better decisions, and faster turnaround times. But for many carriers and brokers, technology initiatives still fail to deliver meaningful impact. Adoption stalls. Workarounds persist. Teams revert to legacy processes.

In this replay episode of Unstructured Unlocked, Tami Pantzke shares a grounded, experience-driven view of how insurers should evaluate and implement new technology. Drawing on more than 30 years in reinsurance and insurance operations, she explains why success depends less on the tool itself and more on how risk, people, and process are handled from the start.

Listen to the full podcast here.

Technology decisions start with growth plans, not tools

According to Pantzke, technology conversations rarely begin with innovation for its own sake. They start with growth plans. Five-year forecasts, new client strategies, or M&A activity force leaders to ask a basic question: can current operations scale without breaking?

That question drives everything that follows. Teams evaluate whether existing systems can handle increased volume, whether enhancements are viable, or whether a new platform is required. Crucially, this evaluation must consider cost, risk, and operational margin at the same time. New technology cannot outpace growth potential or introduce unnecessary complexity.

Pantzke outlined a disciplined approach that many insurers skip when under time pressure. Compare the status quo, a modified legacy system, and new vendors side by side. Apply the 80/20 rule: if a solution cannot cover roughly 80% of core use cases, the uncovered remainder becomes the highest-risk, most manual part of the operation. That is where errors, rework, and compliance exposure tend to surface.
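The side-by-side comparison can be made concrete with a simple coverage check. The sketch below is illustrative only; the option names, use cases, and threshold are hypothetical stand-ins, not details from the episode:

```python
def coverage(supported, core):
    """Fraction of core use cases an option supports out of the box."""
    return len(core & supported) / len(core)

def evaluate(options, core, threshold=0.8):
    """Score each option (status quo, modified legacy, new vendor) against
    the same core use-case list, flag those below the 80% bar, and list
    the uncovered gap that would remain manual, high-risk work."""
    results = {}
    for name, supported in options.items():
        cov = coverage(supported, core)
        results[name] = {
            "coverage": cov,
            "meets_bar": cov >= threshold,
            "manual_gap": sorted(core - supported),  # what stays manual
        }
    return results

# Hypothetical example: three options scored against four core use cases.
core = {"policy_intake", "renewal_processing", "claims_fnol", "audit_trail"}
options = {
    "status_quo": {"policy_intake", "renewal_processing"},
    "modified_legacy": {"policy_intake", "renewal_processing", "claims_fnol"},
    "new_vendor": core,
}
scores = evaluate(options, core)
```

The point of scoring all three options against the same list is that the "manual_gap" column, not the headline coverage number, usually shows where errors and compliance exposure will concentrate.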

The takeaway is clear. Technology decisions are business decisions first. Without a clear understanding of growth drivers and operational constraints, even well-intentioned investments can create friction instead of capacity.

Change management determines whether technology actually sticks

One of the strongest themes in the conversation is that implementation failure is rarely technical. It is organizational. Pantzke emphasized that most teams underestimate how disruptive change feels on the ground, especially in highly regulated, document-heavy insurance environments.

Effective change management starts before a tool goes live. Leaders must understand current workflows, map processes end to end, and assess the skills and capacity of the people involved. New technology often shifts where work happens and who performs it. That has real implications for training, staffing, and morale.

Pantzke described the importance of involving the right people early, including experienced SMEs and frontline users, not just enthusiastic early adopters. Pilots matter. Dual-entry periods reveal where friction exists. Metrics such as time per task, error rates, and proficiency gains provide real signals about whether adoption will scale.
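The pilot metrics mentioned above (time per task, error rates, proficiency gains) can be summarized from dual-entry logs with a few lines of arithmetic. This is a minimal sketch under assumed inputs; the log format is hypothetical, not something described in the episode:

```python
from statistics import mean

def pilot_metrics(entries):
    """Summarize dual-entry pilot logs.
    Each entry is (week, minutes_per_task, had_error).
    Proficiency gain compares average task time in the first
    pilot week against the last pilot week."""
    weeks = [e[0] for e in entries]
    first_week, last_week = min(weeks), max(weeks)
    first_avg = mean(e[1] for e in entries if e[0] == first_week)
    last_avg = mean(e[1] for e in entries if e[0] == last_week)
    return {
        "avg_minutes_per_task": mean(e[1] for e in entries),
        "error_rate": sum(1 for e in entries if e[2]) / len(entries),
        "proficiency_gain": (first_avg - last_avg) / first_avg,
    }

# Hypothetical pilot: tasks slow and error-prone in week 1, faster by week 4.
log = [(1, 10, True), (1, 12, False), (4, 6, False), (4, 8, False)]
summary = pilot_metrics(log)
```

Tracking these numbers through the dual-entry period gives the "real signals" the episode refers to: a flat proficiency curve or a stubborn error rate is an early warning that adoption will not scale.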

When teams are excluded or feedback is ignored, resistance shows up later in the form of workarounds, partial usage, or outright abandonment. In Pantzke’s experience, failed implementations almost always shared the same warning signs: limited stakeholder engagement, unclear problem statements, and assumptions that users would simply adapt.

Faster pilots still require disciplined evaluation

As insurers experiment with AI-driven tools and smaller, more targeted initiatives, implementations are moving faster. But Pantzke cautioned against confusing speed with rigor. Even limited-scope pilots require clear ownership, defined success criteria, and alignment with existing systems and priorities.

She shared examples where teams enthusiastically adopted new tools, showcased them to clients, and then discovered months later that usage was near zero. In many cases, the technology worked as promised. The failure was strategic. Overlap with existing systems was ignored. Long-term scalability was not considered. No one owned adoption beyond the initial rollout.

The lesson applies directly to today’s AI landscape. Proofs of concept can validate functionality, but they do not guarantee operational value. Without asking where the tool fits, how it integrates, and who is accountable for outcomes, insurers risk accumulating disconnected solutions that quietly disappear.

Related content: How insurers are rethinking technology across underwriting, claims, and servicing

The real risk is ignoring the front end of operations

Across every example Pantzke shared, one pattern stood out. Technology creates value only when it reduces friction at the front end of the workflow. When intake, handoffs, and controls are poorly designed, downstream systems inherit inconsistency and risk.

Successful teams focus on controls, data quality, and clear ownership early in the process. They design implementations that surface issues, stop errors before they propagate, and provide visibility into how decisions are made. In regulated environments, traceability is not optional. It is foundational.

The takeaway from this replay episode is simple but often overlooked. Modernization is not about chasing new tools. It is about making deliberate, well-governed decisions that align technology with how insurance work actually gets done.

When insurers start with operational reality, involve the right people, and manage change with intention, new technology becomes a capacity multiplier instead of another source of friction.

Subscribe to the Unstructured Unlocked podcast to get the latest episodes on your favorite platforms, including: 

Apple Podcasts

Spotify

Amazon Music
