
(T) There are now five ways to develop and launch a generative model (a short code sketch for each path follows the list):
- Use an open-source model such as Open Assistant, led by YouTube’s ML rock star Yannic Kilcher (now available on Hugging Face and marketed as HuggingChat), Stable LM for LLMs, or Stable Diffusion for computer vision, the latter two from Stability AI
- Use the OpenAI APIs directly: GPT-3.5 Turbo (the GPT-4 APIs are still in a limited beta with a waiting list) and the ChatGPT APIs for NLP, the DALL-E APIs for image generation, or the Whisper APIs for speech-to-text, and build your product on top of those APIs; that is, for instance, what Salesforce has done with Einstein GPT and Snapchat with My AI for their customers
- Use the Microsoft Azure OpenAI Service; note that you can “retrieve (your own data) and augment (it)” with Azure Cognitive Search, so that your enterprise data is ingested into the ChatGPT prompt when developing a generative application
- Use Amazon Bedrock to access Claude from Anthropic, Stable Diffusion from Stability AI, and Jurassic-2 from AI21 Labs
- Use Google Cloud Vertex AI to access the Google PaLM APIs and other foundation models from Google and DeepMind, and develop a generative app front end with the Google Generative AI App Builder
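For the open-source path, here is a minimal sketch of running an open model locally with the Hugging Face transformers library; the Stable LM checkpoint ID below is illustrative, and any open chat or instruction model (an Open Assistant release, for example) can be substituted.

```python
# Minimal sketch: local inference with an open-source LLM via Hugging Face transformers.
# The model ID is an assumption -- swap in whichever open checkpoint you have access to.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="stabilityai/stablelm-tuned-alpha-7b",  # illustrative open model ID
    device_map="auto",  # requires the accelerate package; spreads weights across available GPUs
)

prompt = "Explain retrieval-augmented generation in two sentences."
outputs = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```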
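For the direct OpenAI path, here is a minimal sketch of a GPT-3.5 Turbo chat completion, assuming the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch: calling the GPT-3.5 Turbo chat completion API directly (openai < 1.0 interface).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful product assistant."},
        {"role": "user", "content": "Draft a two-sentence summary of our release notes."},
    ],
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```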
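For the Azure path, here is a sketch of the “retrieve and augment” pattern: query Azure Cognitive Search for relevant enterprise documents, then place them in the prompt sent to an Azure OpenAI deployment. The search endpoint, index name, document field, and deployment name are placeholders.

```python
# Minimal sketch: retrieve enterprise data from Azure Cognitive Search and augment the
# chat prompt sent to an Azure OpenAI deployment (openai < 1.0 interface).
import os
import openai
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# 1. Retrieve relevant enterprise documents.
search = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="enterprise-docs",  # hypothetical index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
hits = search.search("vacation policy", top=3)
context = "\n\n".join(doc["content"] for doc in hits)  # assumes a "content" field in the index

# 2. Augment the prompt and call the Azure OpenAI deployment.
openai.api_type = "azure"
openai.api_base = "https://<your-openai-resource>.openai.azure.com"
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # your Azure deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "Answer only from the provided context.\n" + context},
        {"role": "user", "content": "What is our vacation policy?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```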
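For the Amazon Bedrock path, here is a sketch of invoking Claude with boto3; the model ID and the Anthropic prompt format reflect the Bedrock documentation at the time of writing, and the models enabled in your account may differ.

```python
# Minimal sketch: invoking Claude through Amazon Bedrock with boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize what a foundation model is.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model ID; Stable Diffusion and Jurassic-2 use their own IDs
    contentType="application/json",
    accept="application/json",
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```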
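For the Google Cloud path, here is a sketch of calling the PaLM text model through the Vertex AI SDK (the `google-cloud-aiplatform` package); the project ID is a placeholder and the model name assumes the text-bison foundation model.

```python
# Minimal sketch: calling the PaLM text model through Vertex AI.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project ID

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Write a one-paragraph product description for a smart thermostat.",
    temperature=0.3,
    max_output_tokens=256,
)
print(response.text)
```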
Here is a fun video from Yannic introducing OpenAssistant “with great fanfare”:
Note: The picture above is from the Safer Vineyards in Napa Valley.
Copyright © 2005-2023 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com
Categories: Artificial Intelligence, Back-End, Deep Learning