Trial Run illustration

Nowadays, companies want to test business decisions and ideas at a scale large enough to trust the results, yet small enough to avoid the heavy investment and risk of full-scale execution.

Trial Run helps you test ideas such as store layouts and remodels, loyalty campaigns, and pricing changes, and recommends the best tailored rollout to maximize gains. With the power of business experimentation, you can implement new ideas with minimal risk and maximum insight. Trial Run helps you:

  • Test each business idea at scale to generate customer insights without excessive spending.
  • Find out why your customers behave the way they do.
  • Learn how your customers will react to your new big idea.

 

What is Trial Run?

Trial Run is a data-driven, cloud-based test management platform used to test business ideas for sites, customers, and markets. Trial Run is built using Amazon EKS, Amazon Redshift, Amazon EC2, Amazon ElastiCache, and AWS Elastic Beanstalk. It is intuitive for beginners and experts alike and helps companies scale experimentation efficiently and affordably.

Trial Run supports the entire experimentation lifecycle, which includes:

Trial Run illustration

  1. Design: Build a cost-effective and efficient experiment that gives you the data you need to proceed with confidence.
  2. Analyze: Work with variables that provide you with targeted and actionable insights.
  3. Act: Use the generated insights to roll out with confidence and show your stakeholders a precise ROI.
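The Analyze step above boils down to comparing test stores against control stores. Here is a purely illustrative sketch; the store figures and the simple difference-in-differences method are assumptions for this example, not Trial Run's actual methodology:

```python
# Minimal difference-in-differences lift estimate: the change in test stores
# minus the change in control stores. All numbers are hypothetical.

def did_lift(test_pre: float, test_post: float,
             control_pre: float, control_post: float) -> float:
    """Lift = (test change) - (control change)."""
    return (test_post - test_pre) - (control_post - control_pre)

# Average weekly sales per store group, before and after a remodel
lift = did_lift(test_pre=100.0, test_post=118.0,
                control_pre=100.0, control_post=105.0)
print(lift)  # 13.0: the remodel added ~13 units/week beyond the market trend
```

A real experiment adds significance testing and careful control-store matching on top of this arithmetic.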

Trial Run offers valuable support across a range of industries, including Retail, Consumer Packaged Goods (CPG), and Telecommunications.

Through its scientific and methodical testing approach, Trial Run can uncover fresh perspectives and guide decision-making through a range of tests, including:

  • Marketing and merchandising strategies.
  • Enhancing the in-store experience.
  • Examining store operations and processes.

These tests are carried out at the store operations and process, product, or consumer levels.

Trial Run offers a dynamic, affordable, and modern way of experimentation so you can stay relevant in a rapidly changing business environment. Trial Run also helps you to drive experiments through:

  • Driver Analysis: Identify key factors that are significant in driving the business outcomes
  • Rollout simulator: Maximize the ROI of a campaign
  • Synthetic Control Algorithm: Determine the right number of control stores, with appropriate weights, to create a synthetic replica of the test store
  • Experiment calendar: Avoid overlaps in experiments
  • Clean search: Let Trial Run parse the experiment repository and find entities that are available for a test
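To picture the synthetic-control idea named above: weight candidate control stores so that their combined pre-period sales track the test store's. The sketch below is an assumption-laden stand-in with made-up data; it fits plain least-squares weights, whereas production synthetic-control methods typically also constrain weights to be non-negative and sum to one:

```python
import numpy as np

# Pre-period weekly sales: rows are weeks, columns are candidate control stores.
# All figures are invented for illustration.
controls = np.array([
    [10.0, 20.0, 15.0],
    [12.0, 22.0, 16.0],
    [11.0, 21.0, 14.0],
    [13.0, 24.0, 17.0],
])
test_store = np.array([15.0, 17.0, 16.0, 18.5])

# Fit weights so the weighted control series tracks the test store.
weights, *_ = np.linalg.lstsq(controls, test_store, rcond=None)
synthetic = controls @ weights  # the weighted "replica" of the test store
```

The replica then serves as the counterfactual: post-period lift is the gap between the test store's actual sales and its synthetic twin.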

     

    What you can expect from Trial Run

    • Graphical design elements make it easy to use the program as an expert or a beginner
    • Automated workflows can guide you through the process from start to finish
    • Highly accurate synthetic control results with automated matching processes that only require minimal human intervention
    • Experiments at speed and scale without the hassle of expert teams or expensive bespoke solutions
    • Training, troubleshooting, and best practices from the best in the business
    • Easy pilots to help your new idea go live in as little as 6 to 8 weeks

    Trial Run stands out from other solutions by offering a transparent methodology and easily explainable recommendations. It uses a cutting-edge technique called “synthetic control” for matching, ensuring precise results. Trial Run is available as a SaaS offering that scales easily with demand and can be hosted on the cloud of the customer’s choice. With Trial Run, customers have unlimited test capabilities, enabling them to design and measure numerous initiatives without restriction. Finally, Trial Run’s success is proven in enterprises, with over 1,000 use cases deployed on our platform.

    How do I get started?

    Are you ready to implement cutting-edge technology to help you build cost-effective and efficient experiments that provide you with the data you need to make decisions?

    If you want to achieve successful Trial Run implementation, get started on the AWS Marketplace.

    Interested in learning more about how Fractal can help you implement Trial Run? Contact us to get in touch with one of our experts.

    Top 7 Announcements from AWS re:Invent 2023

    Amazon Web Services recently concluded its highly anticipated re:Invent 2023 event, showcasing a resurgence of big community events in the tech industry after the pandemic-related hiatus. With a record-breaking attendance of around 50,000 participants, re:Invent 2023 was a milestone event with significant announcements and insights for the tech world. 

    Here’s a summary of the top announcements, drawn from the keynote by AWS CEO Adam Selipsky, followed by a closer look at each.

    Generative AI was, unsurprisingly, the buzzword. AWS rolled out an exciting new AI chip, support for new foundation models, updates to its generative AI platform Amazon Bedrock, and support for vector databases and zero-ETL integrations. They also announced a new generative AI assistant dubbed Amazon Q (in reference to Star Trek’s “Q” character, seemingly). Q will probably have a broad set of use cases in enterprises. 

    1. Amazon Q: A new generative AI assistant

    A Gen AI-powered assistant designed to help users get relevant answers, solve problems, and generate content using Retrieval Augmented Generation (RAG). It also integrates with other AWS services:

    • AWS Console: Amazon Q simplifies exploring AWS’s framework, best practices, and documentation. It is accessible in the AWS management console. For example, “How to build a web application on AWS?” yields a list of services like AWS Amplify, AWS Lambda, and Amazon EC2, along with their advantages and resources. 
    • Connect: Cloud-based contact center service ensures scalable customer service operations. Amazon Q enhances customer interactions by understanding needs, minimizing wait times, and delivering real-time responses via API integration. 
    • Amazon QuickSight: Business intelligence service offering interactive dashboards, paginated reports, and embedded analytics. Amazon Q within QuickSight enhances productivity for business analysts and users by swiftly creating visuals, summarizing insights, answering data queries, and constructing data stories using natural language.  
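The RAG pattern behind Amazon Q can be sketched in a few lines: retrieve relevant documents, then ground the model's prompt in them. The toy word-overlap retriever, document snippets, and prompt template below are illustrative assumptions, not Amazon Q's internals or API:

```python
# Retrieval Augmented Generation in miniature. The documents are made up.
DOCS = {
    "amplify": "AWS Amplify helps you build and host full-stack web apps.",
    "lambda": "AWS Lambda runs your code without provisioning servers.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(DOCS.values(), key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the language model."""
    return f"Answer using this context:\n{retrieve(question)}\n\nQuestion: {question}"

prompt = build_prompt("How do I run code without servers?")
```

A production system swaps the word-overlap retriever for vector search over embeddings, which is where the vector-database announcements below come in.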

    2. Expanded choice of models in Amazon Bedrock 

      Bedrock is the foundation model library from AWS. It enables the integration of the various models that underpin generative AI. AWS has introduced several new models to Amazon Bedrock:

      • Meta’s popular LLaMA-2  
      • Anthropic Claude 2.1: An update to Claude 2, it now offers a 200k token context window (vs. 128k for GPT 4 Turbo), reduced rates of hallucination, and improved accuracy over long documents. 
      • Amazon Titan Image Generator: Like tools such as Midjourney, Stable Diffusion, and DALL-E, Titan Image Generator lets you not only create but also improve images with natural language commands. Titan also supports enterprise needs for invisible watermarks on images.
      • Amazon Titan Multimodal Embeddings: Improve searches by understanding images and text. For instance, a stock photo company could use it to find specific images based on descriptions or other images, enhancing accuracy and speed. 

      3. Three new capabilities for AWS Supply Chain

      An application that unifies data and provides ML-powered actionable insights. It incorporates embedded contextual collaboration and demand planning features while seamlessly integrating with your client’s current enterprise resource planning (ERP) and supply chain management systems. AWS announced three new capabilities:

      • Supply planning (Preview): Plans purchases of raw materials, components, and finished goods. This capability considers economic factors, such as holding and liquidation costs.  
      • Visibility and sustainability (Preview): Extends visibility and insights beyond your client’s organization to your external trading partners. This visibility lets you align and confirm orders with suppliers, improving the accuracy of planning and execution processes.  
      • Amazon Q (Preview): As mentioned above, Amazon Q is now integrated with Supply Chain services to empower inventory managers and planners with intelligent insights and explore what-if scenarios. 

      4. SageMaker capabilities (Preview) 

        • HyperPod: accelerates model training by up to 40%, enabling parallel processing for improved performance.  
        • Inference: reduces deployment costs and latency by deploying multiple models to the same AWS instance.  
        • Clarify: supports responsible AI use by evaluating and comparing models based on chosen parameters.  
        • Canvas: enhancements facilitate seamless integration of generative AI into workflows. 

        5. Next-generation AWS-designed chips (Preview) 

          Amazon jumped into the custom cloud-optimized chip fray and introduced two new chips: 

          • Graviton4: A powerful and energy-efficient chip suitable for various cloud tasks.  
          • Trainium2: A high-performance computing chip, it helps accelerate model training while making it more cost-effective. 

          Notable AWS customers like Anthropic, Databricks, Datadog, Epic, Honeycomb, and SAP already use these chips. 

          6. Amazon S3 Express One Zone  

          A new purpose-built storage class for running applications that require extremely fast data access. 

          7. New integrations for a zero-ETL future  

          AWS announced four new integrations that aim to make data access and analysis faster and easier across data stores.

          To learn more about all this exciting news, please check the AWS re:Invent keynote blog.

          Transform Customer Digital Experience with AIDE

          High digital abandonment rates are typical for brands across domains, driven mainly by the experiential issues site users face during their journey. Identifying these friction points can be burdensome for businesses, as they struggle to digest the wealth of granular data generated along their customers’ digital journeys.

          Fractal’s Automated Insights for Digital Innovation (AIDE) is a smart digital solution for a broad range of industries, including retail, finance, insurance, and more. Its customizable, open-source AI accommodates complex journeys, data security requirements, and time-to-market needs across industries. AIDE helps businesses make smarter decisions at every step, resolve issues quickly, and increase potential revenue and leads.

          AIDE helps:

          • Analyze millions of digital consumer touchpoints to provide insights to increase revenue, engagement, and growth for the business.
          • Identify the root cause of friction from call, chat, and website errors and use AI to parse out critical signals from all the unstructured data in the customer journey.
          • Get the most comprehensive insights into the digital journey, generating data-driven hypotheses for A/B tests that inform website design changes and improve the consumer experience.

           

          What is AIDE?

          AIDE is a digital optimization platform that helps detect and contextualize the issues faced by visitors on digital channels. It acts as an intelligent digital solution for various industries, including retail, finance and insurance, telecommunications, tech, media, and more. AIDE uses customizable, open-source AI that works well with complex journeys, data security, and time-to-market needs for multiple industries. It’s an insight-orchestrated platform supported by natural language-generated insights. AIDE:

          • Selects the sales or service flow to influence particular focus points.
          • Identifies the selected data domains to create a journey sensor.
          • Helps detect the most important anomalies across key performance indicators.
          • Finds the friction point on the website using various journey sensors.
          • Helps analyze the customer voice to add context to insights.

          Leveraging the power of the AWS cloud, AIDE is built on Amazon RDS, Amazon Redshift, Amazon EMR, AWS Lambda, Amazon EC2, and Amazon S3, and can be deployed in your AWS environment.

           

          How can AIDE help my business?

          AIDE product architecture

          AIDE brings together data engineering, Natural Language Processing (NLP), machine learning, and UI capabilities to help clients:

          • Easily develop new data features from raw data to power downstream data analytics use cases.
          • Identify and locate precise points of friction on your company’s website, online events, or funnels.
          • Deep dive into the context of customer dissonance using the voice of customer analytics.
          • Prioritize the most critical areas based on value loss estimation.
          • Analyze the omni-channel customer journey.
          • Provide a user-friendly and intuitive UI for beginners and experts.
          • Provide root cause analysis of customer pain points/dissonance during the digital journey.

           

          What can I expect from AIDE?

          With AIDE, you can capture every in-page interaction and micro-gesture to understand the site user’s journey and identify frustrations and errors impacting conversion and self-serve rate.

          AIDE helps companies remove friction and errors that spoil the visitor’s experiences. It also helps leverage best-in-class AI/ML modules to identify struggle points and recommend changes to drive design improvements using multiple features such as:

          • Sensorize: An automated AI/ML pipeline derives meaningful business indicators using the click activity across customer journeys.
          • Detect: Deviations from expected behavior across digital journeys get captured by applying pattern recognition algorithms to the key digital indicators.
          • Locate: A suite of supervised machine learning algorithms identifies drivers of key customer journey outcomes (drop-off, clear cart, etc.) and measures their relative impact at the page and click level of a customer’s experience.
          • Reveal: An NLP module performs sentiment analysis and entity extraction on voice-of-customer data, such as chat and feedback, to identify the root cause of friction and generate actionable insights.
          • Prioritize: Quantify the insights with respect to loss in revenue or incremental overhead costs to prioritize hypotheses for improving website design.
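The Detect step above can be pictured with a simple trailing z-score check on a key digital indicator. This is an illustrative stand-in with invented KPI data; AIDE's actual pattern-recognition algorithms are more sophisticated:

```python
# Flag deviations from expected behavior in a digital KPI by comparing each
# value against the mean and spread of a trailing window. Data is made up.
from statistics import mean, stdev

def detect_anomalies(series, window=5, threshold=3.0):
    """Return indices where a value deviates > threshold sigmas from its trailing window."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Daily checkout drop-off rate (%): a spike on the last day after a site change
dropoff = [12.0, 11.5, 12.3, 11.8, 12.1, 11.9, 12.2, 19.5]
print(detect_anomalies(dropoff))  # [7]
```

A flagged index is the starting point for the Locate and Reveal steps, which attribute the anomaly to specific pages, clicks, or customer feedback.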

          Overall, AIDE is adaptable and open source, making it a flexible solution for a variety of needs and effective at addressing the complex customer journeys of today’s digital world. It is secure and reliable, with a focus on data protection, and easy to use and deploy, with a quick time to market.

           

          How do I get started?

          AIDE is also available on AWS Marketplace. Contact us to learn how an AIDE solution can identify and reduce friction points, helping the business grow at scale.

          Scale AI model governance with the AI Factory framework

          AI and machine learning projects are on the rise. According to Gartner, 48% of CIOs and tech executives have deployed, or plan to deploy, an AI/ML project in 2022. It is also estimated that 50% of IT leaders will struggle to drive their AI initiatives from Proof of Concept (PoC) to production through 2023.

          Challenges moving AI/ML initiatives from PoC to production

          What causes the gap between PoC and AI/ML model implementation?

          IT and business leaders often cite challenges relating to security, privacy, integration, and data complexity as the key barriers to deploying AI/ML models in production. This is often because governance frameworks are not shared across the organization to ensure compliance and maintainability – if a framework exists at all.

          “At some point, your proof-of-concept is likely to turn into an actual product, and then your governance efforts will be playing catch-up,” writes Mike Loukides in an O’Reilly report. “It is even more dangerous when you’re relying on AI applications in production. Without formalizing some kind of AI governance, you’re less likely to know when models are becoming stale, when results are biased, or when data has been collected improperly.”

          AI models require constant attention in production to achieve scalability, maintainability, and governance. To do that, organizations need a strong MLOps foundation.

          Leveraging MLOps at scale

          In one survey, Deloitte found that organizations that strongly followed an MLOps methodology were…

          • 3x more likely to achieve their goals
          • 4x more likely to feel prepared for AI-related risks
          • 3x more confident in their ability to ethically deploy AI initiatives

          AI Factory framework benefits

          Organizations following an MLOps methodology also gain a clear advantage in time to deployment. McKinsey found that companies without a formalized MLOps process often took 9 months to implement a model. In comparison, companies applying MLOps could deploy models in 2 to 12 weeks!

          The secret? By applying MLOps practices, these companies were able to create a “factory” approach for repeatable and scalable AI/ML model implementation. Their engineers weren’t building everything from scratch; they could pull from a library of reusable components, automate processes, and ensure compliance and governance throughout the organization.
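The "library of reusable components" idea can be pictured as a small step registry that projects compose instead of rebuilding from scratch. The names and steps below are illustrative assumptions, not the AI Factory Framework's actual API:

```python
# A toy component registry: register reusable pipeline steps once,
# then compose them per project. Steps and data are illustrative.
REGISTRY = {}

def register(name):
    """Decorator that adds a step function to the shared registry."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("clean")
def clean(data):
    """Drop missing values."""
    return [x for x in data if x is not None]

@register("scale")
def scale(data):
    """Normalize values to the 0..1 range."""
    top = max(data)
    return [x / top for x in data]

def run_pipeline(step_names, data):
    """Run the named registered steps in order."""
    for name in step_names:
        data = REGISTRY[name](data)
    return data

result = run_pipeline(["clean", "scale"], [2, None, 4, 8])
print(result)  # [0.25, 0.5, 1.0]
```

A real MLOps platform layers versioning, access control, and audit logging onto this composition pattern, which is what makes the pipelines governable as well as reusable.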

          Luckily, you can also take this approach with our AI Factory Framework.

          Our AI Factory Framework

          The AI Factory Framework is a cloud-based MLOps framework that provides organizations with the foundation to deliver Data Science, Machine Learning, and AI projects at scale. It offers enterprise-level reusability, security, integration, and governance.

          Simply put, AI Factory helps customers scale MLOps, centralize governance, and accelerate time to deployment.

          Key benefits of the AI Factory

          By leveraging reusable and standardized artifacts, automated pipelines, and governance solutions, our AI Factory framework reduces duplicate effort and upskilling needs between teams and projects.

          AI Factory framework deliverables

          Customers leveraging the AI Factory Framework can take advantage of our AI engineering best practices to accelerate deployment and ensure model governance at scale.

          AI Factory also helps businesses:

          • Make the entire end-to-end lifecycle more repeatable, governable, safer, and faster
          • Shorten planning and development with accelerated time to deployment
          • Streamline operational, security, and governance processes
          • Reduce development risks & improve model quality
          • Reduce teams’ upskilling needs
          • Achieve higher success rates & ROI

           

          Learn more

          Over the last decade, we have helped many customers build and execute their AI governance strategies. We distilled this experience and the best practices derived from it into this framework to help deliver customers’ AI/ML initiatives at scale.

          Want to set up your own AI Factory Framework? Contact us to get in touch with one of our experts!
