
WalkmeX Bring your own LLM

WalkMe, Inc.

WalkMe OpenAI Deployment for On-Prem Customer Demo

The central goal of this project is to let customers use their own OpenAI-based large language models (LLMs) with our WalkMe products in their own environment.

To make this possible, we have engineered a specialized setup intended for deployment within each customer's own operational environment.

This setup serves as a bridge between the customer's LLM and our WalkMe products, enhancing overall functionality and user experience.

Our setup is designed with flexibility in mind, offering a single installation method that can be adapted to a variety of customer needs.

The method is designed for customers who already have a standard pattern for managing self-hosted deployments.

These customers can use a dedicated subnet to host our Azure resources, allowing them to integrate our services into their existing infrastructure.

This method ensures minimal disruption to their current operations while extending the value of our WalkMe products.

Successful deployment of this setup, however, hinges on the customer meeting certain prerequisites.

These include establishing a Site-to-Site (S2S) VPN connection, which secures data transmission between the customer's local network and the Azure VNet, and enabling peering between the provided subnet and the OpenAI endpoints so that the two networks can communicate efficiently.
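For illustration only, the sketch below shows what the customer-side half of that peering step could look like using the Azure Python SDK (azure-identity and azure-mgmt-network). All resource names, IDs, and the subscription ID are hypothetical placeholders; the actual values are supplied as part of the deployment, and the S2S VPN connection itself is configured separately by your network team.

```python
# Minimal sketch, assuming the Azure Python SDK (azure-identity, azure-mgmt-network).
# All resource names, IDs, and the subscription ID below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# ID of the remote VNet that fronts the OpenAI endpoints (hypothetical).
remote_vnet_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/openai-rg/providers/Microsoft.Network"
    "/virtualNetworks/openai-vnet"
)

# Peer the customer VNet (which contains the provided subnet) with the remote VNet
# so the two networks can exchange traffic.
poller = client.virtual_network_peerings.begin_create_or_update(
    resource_group_name="customer-rg",           # hypothetical
    virtual_network_name="customer-vnet",        # VNet holding the provided subnet
    virtual_network_peering_name="to-openai",
    virtual_network_peering_parameters={
        "remote_virtual_network": {"id": remote_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
    },
)
peering = poller.result()
print(f"Peering '{peering.name}' state: {peering.peering_state}")
```

Note that VNet peering generally has to be configured from both sides of the link; the sketch above covers only the customer-side half under the stated assumptions.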

In essence, this project presents a versatile solution that allows customers to leverage their own LLMs with our WalkMe products, providing an enhanced user experience while maintaining the flexibility to cater to both new and existing infrastructures.