Welcome back to the third installment of our "Demystifying Generative AI" series from Radicle Insights.

Check out the first two installments here:

  1. The Basics and Business Implications of Generative AI
  2. Generative AI in Practice: How are businesses currently using generative AI?

In this edition, we dive into the essential components of implementing generative AI within your organization, offering concrete examples to help illustrate these processes.

The implementation of generative AI is a multidimensional task, requiring a blend of both human and technical resources. Let's delve into what these resources typically look like.

Human Resources

The first dimension of implementing generative AI involves assembling a capable and diverse team. The successful deployment of generative AI within an organization is not a solitary venture, but a collaborative effort of professionals from various disciplines. While the specific makeup of this team can vary depending on the organization's size, structure, and the specific AI project, there are key roles that are generally indispensable:

Data Scientists/AI Specialists: Skilled in machine learning, deep learning, and AI, these individuals develop and train the AI models. OpenAI, for instance, employs a large team of AI specialists who refine its models and build applications like ChatGPT.

Domain Experts: Individuals with extensive knowledge in the area of application. They ensure that the AI model is accurately trained with domain-specific nuances. In healthcare, for example, medical professionals play a crucial role in guiding the training of AI that will be used for medical diagnostics or treatment plans.

Software Developers/Data Engineers: They handle the integration of the AI model with existing systems and manage the data pipelines.

Legal Professionals/Ethicists: To ensure that the use of AI aligns with current regulations and ethical standards, legal and ethics experts must be involved in the process.

Technical Resources

The second dimension of implementing generative AI involves harnessing the necessary technical resources. The ability of generative AI to create new content based on learned patterns is deeply rooted in its technical foundation, which primarily involves data and computing infrastructure:

Data: Generative AI relies heavily on data for training. For example, Morgan Stanley used a content library with hundreds of thousands of pages of knowledge and insights spanning investment strategies, market research and commentary, and analyst insights to help train an internal chatbot built on GPT-4.
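
To make this concrete, the sketch below shows one common way an internal content library might be prepared for a generative AI system, whether for fine-tuning or for retrieval at query time: splitting long documents into overlapping chunks. This is a generic, hypothetical illustration, not Morgan Stanley's actual pipeline; the folder name, chunk sizes, and file format are assumptions.

```python
# Hypothetical sketch: chunk an internal document library for downstream
# embedding, retrieval, or fine-tuning. Paths and sizes are assumptions.
from pathlib import Path

CHUNK_SIZE = 1000   # characters per chunk (assumed)
OVERLAP = 200       # overlap between chunks to preserve context (assumed)

def chunk_text(text: str, size: int = CHUNK_SIZE, overlap: int = OVERLAP):
    """Split one document into overlapping character chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def build_corpus(library_dir: str = "content_library"):
    """Walk a (hypothetical) folder of .txt research notes and chunk each one."""
    corpus = []
    for doc in Path(library_dir).glob("*.txt"):
        text = doc.read_text(encoding="utf-8")
        for i, chunk in enumerate(chunk_text(text)):
            corpus.append({"source": doc.name, "chunk_id": i, "text": chunk})
    return corpus

if __name__ == "__main__":
    corpus = build_corpus()
    print(f"Prepared {len(corpus)} chunks from the library.")
```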

Hardware/Software Infrastructure: Training AI models requires robust computing power, often achieved through GPUs. Cloud platforms such as Amazon Web Services or Google Cloud can provide scalable infrastructure for training and deploying these models. The software stack may include machine learning frameworks like TensorFlow or PyTorch.
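
As a small illustration of what this software layer looks like in practice, here is a minimal PyTorch sketch that checks whether the environment provides a GPU (for example, on a cloud GPU instance from Amazon Web Services or Google Cloud) and falls back to CPU otherwise. It is a generic snippet, not a production training setup.

```python
# Minimal PyTorch sketch: check what hardware the environment provides.
# Illustrative only; a cloud GPU instance would typically report one or
# more CUDA devices here.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"{torch.cuda.device_count()} GPU(s) available, "
          f"e.g. {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No GPU detected; falling back to CPU.")
```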

What might the implementation process look like?

Define the Use Case: The process begins by identifying a business problem that generative AI can solve. In the Morgan Stanley example above, the company wanted to improve the way knowledge was surfaced and delivered from advisors to clients.

Assemble the Team: As outlined above, gather the required expertise.

Data Collection: Gather the data that the AI model will learn from. This could be internal data, like customer transaction history, or external data, like social media feeds.
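
As a rough sketch of what this step can involve, the snippet below merges a (hypothetical) internal CSV export of customer transactions with records standing in for an external feed, and writes everything to a single dataset file. Column names, file paths, and the placeholder external records are all illustrative assumptions; a real pipeline would call the external provider's API instead.

```python
# Hypothetical sketch of a data-collection step: merge internal records with
# external ones into a single dataset for later training or retrieval.
import csv
import json
from pathlib import Path

def load_internal_transactions(path: str = "transactions.csv"):
    """Read an internal CSV export (assumed schema: customer_id, item, amount)."""
    if not Path(path).exists():
        return []  # nothing exported yet
    with open(path, newline="", encoding="utf-8") as f:
        return [{"source": "internal", **row} for row in csv.DictReader(f)]

def load_external_mentions():
    """Stand-in for an external feed (e.g., social media); a real pipeline
    would fetch from the provider's API here instead of returning placeholders."""
    return [{"source": "external", "text": "Example public post mentioning the brand."}]

if __name__ == "__main__":
    records = load_internal_transactions() + load_external_mentions()
    Path("dataset.jsonl").write_text(
        "\n".join(json.dumps(r) for r in records), encoding="utf-8"
    )
    print(f"Collected {len(records)} records into dataset.jsonl")
```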

Model Training: Train the AI model on the collected data. This could take weeks to months depending on the complexity of the model and the quality and quantity of the data.
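
The shape of this step varies widely, but at its core it is an optimization loop over the collected data. The sketch below is a deliberately tiny PyTorch training loop on synthetic data, purely to illustrate the mechanics; real generative model training or fine-tuning involves far larger models, datasets, and infrastructure than shown here.

```python
# Toy PyTorch training loop on synthetic data, purely to illustrate the
# mechanics of the training step. Real generative models are vastly larger.
import torch
import torch.nn as nn

torch.manual_seed(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model and synthetic dataset (assumptions for illustration).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64)).to(device)
inputs = torch.randn(512, 64, device=device)
targets = torch.randn(512, 64, device=device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```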

Deployment: After rigorous testing, the trained model is integrated into the business processes or products. For example, in the case of Morgan Stanley, a GPT-4 model was embedded into internal-facing applications for advisors and analysts to interact with.
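
Deployment often means wrapping the model behind an internal service or embedding calls to it inside existing tools. The sketch below shows one generic pattern: an internal helper that sends a question, plus any retrieved context, to a GPT-4-class model through the OpenAI Python SDK. It is a hedged illustration of the pattern, not the actual Morgan Stanley integration; the function name, system prompt, and model name are assumptions, and the call requires an OPENAI_API_KEY in the environment.

```python
# Hypothetical sketch: embed a GPT-4-class model inside an internal tool via
# the OpenAI Python SDK. Not any specific firm's implementation.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_internal_question(question: str, context: str = "") -> str:
    """Send an internal question (plus optional retrieved context) to the model."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You answer questions for internal staff using the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_internal_question("Summarize our latest market commentary."))
```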

Monitoring and Maintenance: Once deployed, it's crucial to continuously monitor the model's performance and update it to ensure optimal results. This step is ongoing as long as the model is in use.
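
Monitoring can start very simply: log each request, its latency, and any user feedback, then review those logs for drift or quality problems. The snippet below is a bare-bones sketch of that idea using Python's standard logging module; the field names and latency threshold are assumptions, and production systems typically add dedicated observability tooling on top.

```python
# Bare-bones monitoring sketch: log every model interaction with latency and
# a simple feedback flag so drift and quality issues can be reviewed later.
import json
import logging
import time

logging.basicConfig(filename="genai_monitoring.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

SLOW_RESPONSE_SECONDS = 5.0  # assumed alerting threshold

def log_interaction(prompt: str, response: str, started_at: float, thumbs_up=None):
    """Record one model call; flag slow responses for follow-up."""
    latency = time.time() - started_at
    record = {
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_s": round(latency, 3),
        "thumbs_up": thumbs_up,
        "slow": latency > SLOW_RESPONSE_SECONDS,
    }
    logging.info(json.dumps(record))

if __name__ == "__main__":
    start = time.time()
    log_interaction("What changed in this quarter's outlook?",
                    "example model output", start, thumbs_up=True)
```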

What is the typical timeline?

The timeline for implementing generative AI varies significantly based on the complexity of the project, ranging from a few months to more than a year. This can depend on the level of training required for your generative AI model, as well as the complexity of the use case, among other variables. It's a dynamic process requiring continuous monitoring and updating. That being said, as more tools and base models hit the market, the timeline for deploying game-changing AI within organizations is shrinking rapidly.

Radicle is currently engaged in a number of projects helping customers such as a large brewer, a large spirits company, a large technology company, and an insurance company understand the opportunities, implications, and risks of Generative AI.

In all cases, we're mapping the generative AI startups and approaches relevant to their needs (e.g. their value chain, a specific workflow, their marketing organization) and using that map to create custom networks of startup and technology leaders who provide unique, expert perspectives on the opportunities, implications, key risks, and more.

If you’re interested in and intrigued by Generative AI and Radicle’s expert-led approach, we’d love to share some insights over a 15-20 minute chat. You can schedule some time here.