A forward-looking startup set an ambitious goal: develop a chatbot that could answer queries from documents, websites, PDFs, presentations, and a graph database, without relying on OpenAI, Anthropic, or any other cloud-vendor Large Language Model (LLM). The challenge was to find a cost-effective alternative to these ubiquitous AI tools while retaining full control of fine-tuning and context selection, avoiding vendor lock-in, and ensuring solid performance even on CPUs.
candido.ai accepted the challenge, drawing on their AI expertise to design a fitting solution. They selected, implemented, and benchmarked several open-source LLMs, verifying that the models could run both on cloud platforms and on CPUs. They built a custom model-training framework with vector-database capabilities, using ChromaDB and FAISS for efficient context selection, so the chatbot could answer queries accurately from the input dataset. This approach gave them complete control over fine-tuning, enabling the chatbot to ground its responses in diverse sources, from plain documents to a complex graph database.
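The core of this kind of context selection is nearest-neighbor search over embedded text chunks. The production system used ChromaDB and FAISS; the sketch below is a minimal, NumPy-only illustration of the same retrieval step, with random vectors standing in for real sentence embeddings (the chunks, dimensions, and function names are illustrative, not candido.ai's actual code).

```python
import numpy as np

# Toy document chunks standing in for text extracted from PDFs,
# websites, presentations, and graph-database exports.
chunks = [
    "The chatbot answers questions from internal documents.",
    "Open-source LLMs can run on CPUs for cost efficiency.",
    "Vector databases retrieve the most relevant context.",
]

# Stand-in embeddings: a real pipeline would use a sentence-embedding
# model; deterministic random unit vectors keep this sketch self-contained.
dim = 8
rng = np.random.default_rng(42)
embeddings = rng.standard_normal((len(chunks), dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def top_k_context(query_vec, k=2):
    """Return the k chunks whose embeddings are most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ q                 # cosine similarity per chunk
    best = np.argsort(scores)[::-1][:k]     # indices of the k highest scores
    return [chunks[i] for i in best]

# A query embedded near chunk 2 should retrieve chunk 2 among its results;
# the retrieved text would then be prepended to the user question in the
# LLM prompt.
query = embeddings[2] + 0.1 * rng.standard_normal(dim)
context = top_k_context(query)
```

A library like FAISS replaces the brute-force `argsort` with an optimized index (e.g. an exact or approximate nearest-neighbor structure), which is what makes this step fast enough to run on CPUs at scale.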
With candido.ai's combination of open-source models and vector databases, the startup successfully built the chatbot it had envisaged: one that answers questions precisely from a wide array of information sources and performs well on both cloud platforms and CPUs, delivering significant cost savings.
By keeping fine-tuning and context selection fully in-house, the chatbot avoided vendor lock-in while delivering accurate, relevant responses. The result validated the startup's ambitious vision and demonstrated the potential of open-source AI models for building versatile, cost-effective AI solutions that leave control in the user's hands.