Beyond data structures: How AI/LLMs can transform digital products
- Aldo R.
- May 7
- 8 min read
Updated: May 10

Traditional digital products have long been constrained by their reliance on structured data. Users fill out forms with predefined fields, and the system returns information in equally rigid formats. Since the early days of computing, digital products have relied on relational databases queried with SQL or hierarchical formats like JSON: structures that machines can store and manage easily, but that force human thinking into predefined categories and relationships. Throughout my career, I can clearly remember times when database structures dictated the user experience, and designers had to shape data-entry fields and workflows around backend data needs rather than human needs.
But with the rise of large language models (LLMs), we're seeing a fundamental shift in how digital products can operate. For the first time, computers can work with natural language — the unstructured, messy, contextual way that humans naturally communicate. This capability is transforming digital products by breaking free from structured data constraints and creating interfaces that adapt to human needs rather than forcing humans to adapt to technological limitations.
Moving Beyond Form Fields
When working with digital products, users must translate their needs into the product's language, creating significant friction. Every interaction becomes an exercise in constraint—decomposing natural thoughts into unfamiliar workflows and discrete data points while learning each product's unique "language" of forms, dropdowns, and checkboxes. Consider how many clicks it takes to book a flight, or the mental mapping required when your goals don't perfectly align with available screens and options. If I ask you to pivot a table in Excel, maintain a table of contents in Word, or retrieve emails from multiple people related to one project, you'll recognize this frustration of adapting to rigid, structured interfaces.
LLMs fundamentally invert this relationship. Instead of filling out multiple form fields, users can simply state what they want: "I need a flight from San Francisco to Boston next Thursday, returning Sunday, preferably in the morning." The AI can parse this request, extract the relevant information, and perform the necessary actions.
This shift doesn't just save time — it changes the nature of human-computer interaction. The burden of understanding shifts from the user to the system, creating more intuitive and accessible digital experiences.
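To make this concrete, here is a minimal sketch of how a product might turn a request like the one above into the structured fields a booking backend still needs. It assumes an OpenAI-style chat completions API; the model name, prompt, and field names are illustrative rather than any particular product's implementation.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

client = OpenAI()

SYSTEM_PROMPT = (
    "Extract flight search parameters from the user's request. "
    "Respond with JSON only, using the keys: origin, destination, "
    "depart_date, return_date, time_preference. Use null for anything not stated."
)

def extract_flight_request(user_text: str) -> dict:
    """Turn a free-form travel request into the structured fields a booking API expects."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    return json.loads(response.choices[0].message.content)

fields = extract_flight_request(
    "I need a flight from San Francisco to Boston next Thursday, "
    "returning Sunday, preferably in the morning."
)
print(fields)
```

The user never sees the JSON. The structure still exists, but it is produced by the system instead of being demanded of the person.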
Transforming Complex Workflows
Many digital products require users to follow multi-step processes that mirror the underlying data structures rather than human thought patterns. Think about creating a financial report, setting up automation rules, or configuring complex software. These workflows often involve numerous screens, options, and decision points — each representing a node in a rigid decision tree that the product's developers envisioned.
This approach creates several pain points. Users must memorize or rediscover complex sequences of actions with each use, and when their goals don't match the path the developers envisioned, they are left reshaping their intent to fit the available screens.
With LLMs, complex workflows can be simplified through natural conversation. A user might say, "Create a monthly report showing sales by region, comparing this year to last, with a focus on growth markets." The AI can understand this request holistically, gathering the necessary data, applying the appropriate analysis, and generating the report—all without forcing the user through a series of form screens.
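One common way to implement this is tool (function) calling: the product exposes its existing capabilities as callable functions, and the model decides which ones to invoke and with what arguments. The sketch below again assumes an OpenAI-style API; get_sales_by_region and build_report are hypothetical stand-ins for a product's real backend.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical backend capabilities, exposed to the model as tools.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_sales_by_region",
            "description": "Fetch monthly sales totals grouped by region for a given year.",
            "parameters": {
                "type": "object",
                "properties": {"year": {"type": "integer"}},
                "required": ["year"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "build_report",
            "description": "Assemble a report from prepared datasets, optionally highlighting growth markets.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "highlight_growth_markets": {"type": "boolean"},
                },
                "required": ["title"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "Create a monthly report showing sales by region, comparing this "
                   "year to last, with a focus on growth markets.",
    }],
    tools=TOOLS,
)

# The model responds with the tool calls it wants to make; the product executes them.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In practice this runs as a loop: the product executes each call, returns the results to the model, and the model either asks for more data or produces the finished report.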
This transformation allows for emergent workflows that adapt to the user's specific needs rather than following predetermined paths. Products using LLMs can handle ambiguity, infer reasonable defaults from context, and move forward, clarifying only when necessary.
Perhaps most importantly, LLMs enable non-technical users to accomplish complex tasks that previously required specialized knowledge. By describing desired outcomes rather than specifying exact processes, users can harness the full power of sophisticated digital products without learning their inner workings.
Users can focus on outcomes rather than processes: on "what" they want to accomplish rather than "how" to navigate the system.
Unlocking Unstructured Data
Traditional systems struggle with unstructured data like text documents, images, emails, and audio files. These rich information sources don't neatly fit into tables with predefined columns or objects with standardized properties. As a result, they typically require extensive tagging, categorization, and structured metadata to be useful.
This limitation has created significant blind spots in our digital tools. The vast majority of human knowledge and communication exists in unstructured formats, yet our products have been able to work effectively with only the small fraction that's been manually structured. Organizations are sitting on mountains of valuable information trapped in documents, communications, and media files that their systems can't meaningfully process.
In my experience with the Reagan-Udall Foundation for the Food and Drug Administration (FDA), a non-profit organization created by Congress to support the FDA's mission, we studied the FDA's publicly available information that the private sector could use to develop digital products that amplify the FDA's public communication. We discovered that many of the FDA's publicly available datasets contained unstructured data in the form of recommendation letters, among other documents. AI would be the perfect tool for capturing insights from these valuable but unstructured data sources, which were previously difficult to analyze at scale.
LLMs excel at processing precisely this kind of unstructured data. They can analyze documents, extract key information, generate summaries, identify patterns, and connect related concepts—all without requiring rigid classification systems. Their ability to understand context and semantics allows them to derive meaning from information in ways that traditional systems simply cannot.
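As a rough illustration, the sketch below runs a batch of documents, such as the recommendation letters mentioned above, through a single extraction prompt and collects the results as structured records. The field names, folder layout, and model are assumptions for the example, not the Foundation's actual pipeline.

```python
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Summarize this document and extract key facts. Respond with JSON only, "
    "using the keys: summary, topics, recommendations, dates_mentioned."
)

def analyze_document(text: str) -> dict:
    """Produce a structured record from one unstructured document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text[:12000]},  # naive truncation; real pipelines chunk long documents
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Walk a folder of plain-text letters and build a table-like list of records.
records = [
    {"file": p.name, **analyze_document(p.read_text())}
    for p in Path("letters").glob("*.txt")
]
```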
By bridging the gap between unstructured and structured data, LLMs are dramatically expanding the range of information that digital products can effectively utilize, unlocking value from previously inaccessible resources.
Real-World Applications
The transformation from structured to unstructured data interactions is already reshaping digital products across industries:
Mobile Applications: One of our projects at Copotential involved enhancing a consumer goal-management app on iOS by replacing structured data inputs with AI-driven conversation. By integrating LLMs, we dramatically improved customer satisfaction while capturing richer data with intent, context, and nuance. Most of the app's familiar user experience stayed the same; the AI additions created customer value by balancing innovation with familiarity in our integration approach.
Customer Support: During my time at Autodesk, I led the mobile-first Universal Help product, serving 26 million visitors. As this predated the AI revolution, we relied on complex decision trees to provide contextual self-help that users had to navigate step by step. LLMs would have transformed this experience, allowing customers to simply describe their problems in natural language and receive immediate, relevant assistance.
Content Creation: Users can describe what they want to create, and the AI can generate or assist with producing it. Marketing teams can generate initial drafts of campaigns by describing their goals and audience. Designers can rapidly prototype by describing layouts rather than manually arranging elements.
Data Analysis: LLM-powered analytics allows users to ask questions about their data in plain language: "How did our Q2 performance compare to last year, broken down by product category?" The system translates these natural questions into the necessary data operations and presents insights in accessible formats.
Product Discovery: LLM-enabled discovery allows users to describe what they're looking for in their own words: "I need a lightweight jacket for hiking in unpredictable weather." The system can understand these complex requirements and surface relevant options, even when the user's needs span multiple traditional categories.
Knowledge Management: LLM-powered knowledge platforms can unify disparate information sources, understanding the content and relationships between information regardless of format or location. Users can ask questions in natural language and receive answers drawn from across the organization's collective knowledge.
Workflow Automation: LLM-based automation allows users to describe the workflows they want in plain language: "When we get new customer feedback with negative sentiment, summarize it and alert the product team." The system can understand these instructions and implement the appropriate automation.
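As a small illustration of that last case, the sketch below classifies a piece of feedback and, only when the sentiment is negative, summarizes it and raises an alert. The post_to_product_channel function is a hypothetical stand-in for whatever alerting integration a team actually uses.

```python
import json
from openai import OpenAI

client = OpenAI()

def post_to_product_channel(message: str) -> None:
    """Hypothetical stand-in for a Slack/Teams/webhook integration."""
    print("ALERT:", message)

def triage_feedback(feedback: str) -> None:
    """Classify sentiment and alert the product team only on negative feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": (
                "Classify the customer feedback and respond with JSON only, "
                "using the keys: sentiment (positive|neutral|negative) and summary."
            )},
            {"role": "user", "content": feedback},
        ],
        response_format={"type": "json_object"},
    )
    result = json.loads(response.choices[0].message.content)
    if result["sentiment"] == "negative":
        post_to_product_channel(result["summary"])

triage_feedback("The new export button keeps failing and I lost an hour of work.")
```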
Implementation Considerations
While the benefits of integrating LLMs into digital products are significant, successful implementation requires thoughtful design:
Balancing Structure and Freedom: The most effective implementations combine the flexibility of LLMs with appropriate guardrails. This might mean offering structured components alongside natural language options or using the LLM to convert natural language into structured data behind the scenes.
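One practical guardrail is to validate whatever the model produces against the same schema the structured interface already enforces, and to fall back to that interface when validation fails. The sketch below uses Pydantic for the validation step; the schema and the fallback behavior are illustrative assumptions.

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class FlightSearch(BaseModel):
    """The same schema the traditional form would have enforced."""
    origin: str
    destination: str
    depart_date: str
    return_date: Optional[str] = None
    time_preference: Optional[str] = None

def parse_or_fall_back(raw_model_output: str) -> Optional[FlightSearch]:
    """Accept the LLM's output only if it satisfies the schema; otherwise signal a fallback."""
    try:
        return FlightSearch.model_validate_json(raw_model_output)
    except ValidationError:
        # Fall back to the structured form, or ask the model or the user to clarify.
        return None

search = parse_or_fall_back('{"origin": "SFO", "destination": "BOS", "depart_date": "2025-05-15"}')
print(search)
```

The same pattern doubles as one of the fallback mechanisms discussed below.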
LLM Adaptation Strategies: Adapting pre-trained LLMs to specific domains or tasks involves important cost-benefit tradeoffs. Options range from lightweight approaches like prompt engineering and in-context instructions to more resource-intensive methods such as full fine-tuning, LoRA (Low-Rank Adaptation), and other adapter-based techniques. Retrieval-Augmented Generation (RAG) can enhance model capabilities by connecting LLMs to external knowledge sources without expensive retraining. The right approach depends on the specificity of your domain, available resources, and performance requirements.
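For many teams, RAG is the lightest-weight starting point among these options. The sketch below shows the core loop: embed a small document set, retrieve the passages closest to a question, and pass them to the model as context. The embedding and chat model names are illustrative.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support and a 99.9% uptime SLA.",
    "Data exports are available in CSV and JSON formats.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)  # illustrative model
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(docs)

def answer(question: str) -> str:
    # Retrieve the most similar documents by cosine similarity.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(docs[i] for i in scores.argsort()[-2:])
    # Generate an answer grounded only in the retrieved context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```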
Operational Costs: LLM inference costs are directly tied to usage patterns and token consumption. Each user interaction consumes tokens (the units of text processing), with costs scaling based on input length, output generation, and API call frequency. Implementation decisions around caching common responses, optimizing prompt length, and managing conversation context can significantly impact operational expenses as usage scales.
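A back-of-the-envelope model makes these tradeoffs easier to reason about. The sketch below estimates monthly inference cost from tokens per interaction and traffic volume; the per-token prices are placeholders, since actual rates vary by provider and model.

```python
def monthly_inference_cost(
    interactions_per_month: int,
    input_tokens: int,            # prompt + conversation context per interaction
    output_tokens: int,           # generated tokens per interaction
    price_in_per_1k: float,       # placeholder rate, $ per 1K input tokens
    price_out_per_1k: float,      # placeholder rate, $ per 1K output tokens
    cache_hit_rate: float = 0.0,  # fraction of interactions served from a response cache
) -> float:
    billable = interactions_per_month * (1 - cache_hit_rate)
    per_call = (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k
    return billable * per_call

# Example: 100K interactions, ~1.2K input and 300 output tokens each, 30% cache hits.
print(monthly_inference_cost(100_000, 1_200, 300, 0.0005, 0.0015, cache_hit_rate=0.30))
```

Even a crude model like this shows why trimming prompt length and caching common responses matter once usage scales.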
Pricing Strategy: Products leveraging LLMs require careful pricing models that account for variable AI costs. Options include usage-based tiers (free tier with token limits, premium tiers with higher allowances), feature-based segmentation (basic features using minimal AI, premium features with more extensive AI capabilities), or value-based pricing tied to specific outcomes the AI enables. The pricing strategy must balance making the product accessible while ensuring sustainable margins as AI usage scales.
Transparency and Control: Users should understand when they're interacting with AI, how their inputs are being processed, and what capabilities and limitations exist. Users need appropriate control mechanisms to correct misinterpretations or refine outcomes.
Regulatory Compliance and Privacy: Implementing LLMs requires careful attention to data privacy regulations like GDPR, CCPA, and similar frameworks worldwide. Organizations must consider how user inputs are processed, stored, and potentially used for model improvement. Key considerations include obtaining appropriate consent for data processing, implementing data minimization practices, ensuring user rights to access or delete their data, and maintaining transparency about how AI systems use personal information.
Fallback Mechanisms: Robust implementations need graceful fallback options—whether that's reverting to more structured interfaces when needed, providing clear ways for users to clarify their intent, or seamlessly connecting to human assistance.
Accuracy and Trust: Product designers must implement appropriate safeguards like fact-checking critical outputs against verified data sources, clearly distinguishing between retrieved information and generated content, and establishing confidence thresholds for different types of tasks.
Multimodal Integration: The most advanced implementations integrate LLMs with other modalities, allowing users to communicate through their preferred channels. This might mean combining text input with image recognition, voice interaction, or graphical interfaces.
Continuous Improvement: LLM-powered features should improve based on actual usage patterns. This requires thoughtful instrumentation to identify where the system succeeds or fails, along with mechanisms to incorporate feedback.
Ethical Use and Bias Mitigation: Product teams must proactively identify potential biases and implement guardrails to prevent harmful outputs. This includes regular testing across diverse user scenarios, implementing fairness metrics, and creating inclusive design processes.
Conclusion
The shift from structured to unstructured data interactions represents one of the most significant transformations in digital product design since the graphical user interface. By enabling computers to understand and generate natural language, LLMs are creating digital products that work with information the way humans do.
The implications extend far beyond convenience. By removing the technical barriers of structured inputs and outputs, LLMs are democratizing access to powerful digital capabilities. Tasks that once required specialized knowledge can now be accomplished through natural conversation. This has the potential to make sophisticated technology accessible to broader and more diverse user bases.
For product designers and developers, this transformation presents both exciting opportunities and new responsibilities. Creating effective experiences with LLMs requires rethinking fundamental assumptions about user interaction, information architecture, and interface design.
The most successful digital products will likely be those that thoughtfully blend the best aspects of structured and unstructured approaches—providing the flexibility of conversation while maintaining the reliability of structured data where it matters most. The goal isn't to eliminate structure entirely, but to make it invisible to users, handling the necessary transformations behind the scenes while presenting a natural, human-centered interface.
By allowing users to engage with technology on human terms rather than technological ones, LLMs are making digital products more accessible, powerful, and aligned with how people naturally think and communicate.

Aldo Raicich
Principal Product Consultant
Aldo Raicich is a digital product strategist and innovation leader with over 20 years of experience transforming digital experiences across various industries. As the Principal at Copotential, he helps organizations reimagine their digital products by integrating cutting-edge technologies like AI and LLMs.
Let's connect to add AI and LLMs to your products and bring new value to your business and customers. Start here.