While the challenges organizations face often fall into common categories, the solutions must be tailored to each company’s unique needs.
Choosing the Right Path
The first step is deciding how your company will integrate these new tools. There are three main options: using pre-built tools, integrating existing models with your own data, or building your own large language models (LLMs) from scratch.
Here are some key factors to consider when making this choice:
- Resources and budget: Pre-built tools are the most cost-effective option but offer less control. Integrating models with your data requires investment in infrastructure and talent. Building LLMs from scratch is the most expensive option, requiring significant resources and cutting-edge expertise.
- Specific needs and use cases: If you only need Generative AI for basic tasks, pre-built tools might suffice. However, if you require highly specialized AI for your core products or services, building custom solutions will provide a greater long-term advantage.
- Data ownership and regulations: In some industries, regulations or data privacy concerns might necessitate integrating models with your data or building solutions in-house.
- Long-term AI strategy: If AI is simply another tool in your toolbox, pre-built solutions might work. But to gain a competitive advantage through AI, you’ll need to develop unique in-house capabilities.
For example, FinanceCorp initially used pre-built Generative AI tools for tasks like writing and summarizing reports. However, these tools proved inadequate for complex financial work such as risk analysis and contract review. To reach the performance it needed, the company switched to custom models integrated with its own data.
Taming the Generative AI Beast
One key lesson learned from pilot projects is the importance of avoiding a sprawl of platforms and tools. A recent McKinsey survey found that “too many platforms” was a major obstacle for companies trying to implement Generative AI at scale. The more complex the infrastructure, the higher the cost and difficulty of managing large-scale deployments. To achieve scale, companies need a manageable set of tools and infrastructure.
One solution is to establish a centralized, single-source enterprise Generative AI platform. While this requires initial standardization efforts, it can significantly reduce operational complexity, ongoing maintenance costs, and associated risks in the long run. It also facilitates consistent and scalable deployment of Generative AI across the organization.
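One way to picture a centralized enterprise platform is a thin internal gateway that every team calls, with vendor-specific adapters hidden behind a single contract. The sketch below illustrates the pattern only; the class and method names (`LLMProvider`, `EnterpriseAIGateway`, `complete`) are illustrative assumptions, not any real product's API.

```python
# Sketch of the "single enterprise platform" idea: one uniform interface,
# vendor adapters behind it, and one central place for monitoring.
# All names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Contract every vendor adapter must satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in adapter; a real one would wrap a vendor SDK."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class EnterpriseAIGateway:
    """Single entry point: a central hook for logging, auth, and monitoring."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider
        self.call_log: list[str] = []  # one auditable record of all usage

    def generate(self, prompt: str) -> str:
        self.call_log.append(prompt)  # standardized monitoring hook
        return self.provider.complete(prompt)


gateway = EnterpriseAIGateway(EchoProvider())
print(gateway.generate("Summarize Q3 risk report"))  # prints: [echo] Summarize Q3 risk report
```

Because every request flows through one gateway, swapping vendors or adding monitoring touches a single component rather than every application, which is the operational simplification the consolidation argument rests on.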
A hybrid approach that combines internal and external expertise might be the most effective strategy. Partnering with a leading technology provider can provide a solid foundation for a robust Generative AI platform. However, you’ll also need to build an internal team with expertise in data science, AI engineering, and other relevant fields. This team can then customize, expand, and manage the platform to meet your specific business needs.
For instance, HSBC, after piloting solutions with seven different Generative AI vendors, faced challenges with high maintenance costs, governance issues, and integration complexities. They decided to consolidate everything on Microsoft’s platform and standardize APIs, data flows, monitoring, and other aspects. This approach helped them reduce their AI operating costs by over 60%.
Conquering the Learning Curve
Finally, there’s the ever-present learning curve. CIOs understand the technical skills Generative AI demands, such as model fine-tuning, vector database management, application engineering, and context engineering. Acquiring this knowledge, however, can be daunting. Building all of the specialized skills in-house is slow and challenging, and even with an accelerated learning curve, an internal team could take months to reach the required level of expertise.
Retail giant GiganteCorp allocated $15 million to assemble an elite team of 50 data scientists and engineers experienced in fine-tuning cutting-edge language models, application engineering, and vector knowledge bases. However, given the high market demand for these specialists, it had filled only 40% of the positions after a year.
The lack of prior experience and the need to master new technologies can make implementing Generative AI seem like a formidable task. However, by partnering with an experienced technology partner, companies can overcome these challenges and unlock the full potential of Generative AI to transform their operations.
After several failed attempts to develop its own Generative AI models, the legal firm BigLaw partnered with experts from Anthropic. Guidance on best practices, benchmarking, iterative refinement, and thorough testing enabled BigLaw’s contract review system to achieve over 95% accuracy in less than six months, a 30% improvement over previous attempts.
A specialized Generative AI partner can and should continue to provide consulting and support services even after the initial capabilities are in place within the organization. Challenges, bottlenecks, and highly specific requirements will inevitably arise as Generative AI is deployed and scaled, and access to the deep expertise of these consultants can be key to resolving them effectively.
The Generative AI models deployed by the fintech company Novo initially delivered excellent results on tasks such as fraud detection and customer support. After eight months, however, performance began to degrade as the underlying data patterns shifted. Novo had to implement continuous monitoring and retraining pipelines to maintain accuracy.
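The kind of drift check that triggers retraining can be sketched with the Population Stability Index (PSI), a common metric for comparing a reference data window against live traffic. This is a minimal illustration, not Novo's actual pipeline; the function names, bin count, and the 0.2 threshold are conventional but illustrative assumptions.

```python
# Minimal drift-monitoring sketch: compare a reference sample of feature
# values against a live sample via the Population Stability Index (PSI),
# and flag when retraining should be triggered. Illustrative only.
import math


def psi(reference, live, bins=10):
    """Population Stability Index between two numeric samples."""
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        # Fraction of the sample falling into bin b; floor at 1e-6
        # so the log term below is always defined.
        count = sum(1 for x in sample if lo + b * width <= x < lo + (b + 1) * width)
        if b == bins - 1:  # include the upper edge in the last bin
            count += sum(1 for x in sample if x == hi)
        return max(count / len(sample), 1e-6)

    return sum(
        (frac(live, b) - frac(reference, b)) * math.log(frac(live, b) / frac(reference, b))
        for b in range(bins)
    )


def needs_retraining(reference, live, threshold=0.2):
    """Rule of thumb: PSI above ~0.2 signals significant drift."""
    return psi(reference, live) > threshold


# Identical distributions: no drift, no retraining flagged.
stable = [i / 100 for i in range(100)]
assert not needs_retraining(stable, stable)

# Shifted distribution: high PSI, retraining flagged.
shifted = [0.5 + i / 200 for i in range(100)]
assert needs_retraining(stable, shifted)
```

In practice a scheduled job would compute such a statistic per feature over recent traffic and kick off the retraining pipeline when the threshold is crossed, which is exactly the "continuous testing and improvement" mindset the conclusion below argues for.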
In conclusion, Generative AI systems are not one-time projects; they require continuous refinement and updating. Adopting a mindset of constant testing, learning, and improvement based on feedback and empirical data is crucial for maximizing the long-term value of Generative AI.
Get in Touch!
Francisco Ferrando
Business Development Representative
fferrando@huenei.com