AI Applications
Planning AI Applications
From the book AI Engineering
Use Case Evaluation
Question: why do you want to build this application?
- Competitive Pressure: if you don’t build, will competitors with AI-driven capabilities make you obsolete?
- Leverage Bigger / New Opportunities: if you don’t build, will you miss chances to drive increased revenue and profit, or to take advantage of new opportunities?
- Understand New Technologies: if you’re not sure how AI impacts the business, build at least to understand the impact
Think about the build vs. buy paths
Role of AI and Humans in the AI Application
- Critical or Complementary: if the application continues to work (or can work) without AI, AI is complementary to the application; if it cannot, AI is critical
- Reactive or Proactive:
  - reactive: generates responses after user interactions (e.g. a chatbot)
  - proactive: generates/surfaces responses without user interaction (e.g. traffic alerts)
- Dynamic or Static: dynamic features are updated continuously with usage/feedback; static features are updated periodically (often manually)
Setting Expectations
What does success look like?
- define important metrics (e.g. percentage of automated customer responses)
Types of metrics
- Quality Metrics
- Latency Metrics
- Cost Metrics
- Other Metrics (e.g. interpretability, fairness)
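The metric types above can be made concrete by encoding success criteria as explicit thresholds that a proof of concept must meet. A minimal sketch; the class name, fields, and threshold values are hypothetical examples, not from the book:

```python
from dataclasses import dataclass

# Hypothetical success criteria for an AI application, combining
# quality, latency, and cost metrics into a single pass/fail check.
@dataclass
class SuccessCriteria:
    min_quality_score: float       # e.g. fraction of responses rated acceptable
    max_p95_latency_ms: float      # latency budget at the 95th percentile
    max_cost_per_request: float    # dollars per request
    min_automated_fraction: float  # e.g. share of customer responses fully automated

    def is_met(self, quality: float, p95_latency_ms: float,
               cost: float, automated_fraction: float) -> bool:
        return (quality >= self.min_quality_score
                and p95_latency_ms <= self.max_p95_latency_ms
                and cost <= self.max_cost_per_request
                and automated_fraction >= self.min_automated_fraction)

criteria = SuccessCriteria(0.9, 2000, 0.01, 0.5)
print(criteria.is_met(quality=0.93, p95_latency_ms=1500,
                      cost=0.008, automated_fraction=0.6))  # True
```

Writing the thresholds down up front makes the later "reassess goals after evaluation" step a mechanical comparison rather than a debate.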
Milestone Planning
- Evaluate existing models: can you use existing models?
- Reassess goals after evaluation
- Determine if you can get to the last mile: from POC results to production impact
Maintenance
- How will the product change over time (given a fast-moving AI space)?
- Determine if you can take advantage of model improvements
- Assess impact of IP usage (AI models trained on potentially proprietary data)
AI Stack
Three layers to any AI application
- Application Development
- AI interface
- Prompt engineering
- Context construction
- Evaluation
- Model Development
- Inference optimization
- Dataset engineering
- Modeling & training
- Evaluation
- Infrastructure
- Compute management
- Data management
- Serving
- Monitoring
Pre-Training, Finetuning, & Post-Training
- Pre-Training: training a model from scratch (model weights are randomly initialized)
- very resource intensive
- Finetuning: continue to train a previously trained model
- done by application developers
- Post-Training: similar to finetuning (further training of an already-trained model)
  - done by model developers before the model is released
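The distinction between pre-training and finetuning comes down to the starting weights: pre-training begins from random initialization, while finetuning continues from weights that were already trained. A toy sketch with a one-parameter linear model (the model, data, and learning rate are hypothetical, for illustration only):

```python
import random

# Toy illustration: pre-training starts from random weights;
# finetuning continues training from already-trained weights.

def train(w, data, lr=0.01, steps=200):
    """Stochastic gradient descent on squared error, starting from weight w."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

random.seed(0)
pretrain_data = [(x, 3.0 * x) for x in range(1, 6)]  # "general" task: y = 3x
finetune_data = [(x, 3.5 * x) for x in range(1, 6)]  # "target" task: y = 3.5x

w0 = random.uniform(-1, 1)                 # pre-training: random initialization
w_pretrained = train(w0, pretrain_data)    # very expensive at real scale
w_finetuned = train(w_pretrained, finetune_data)  # finetuning: continue from trained weights

print(round(w_pretrained, 2), round(w_finetuned, 2))
```

At real scale the asymmetry is what matters: pre-training consumes the bulk of the compute, while finetuning (and post-training) adapts an existing model at a fraction of that cost.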
Application Development
- Evaluation: assess models to mitigate risks and uncover opportunities
- Prompt Engineering & Context Construction
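Context construction can be sketched as: retrieve the documents most relevant to the user's query, then assemble them with the query into a prompt template. A minimal sketch; the template wording and the naive word-overlap `retrieve()` function are hypothetical stand-ins for a real retriever:

```python
# Hypothetical context construction: rank documents by word overlap
# with the query, then fill a prompt template with the top matches.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for a real retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (f"Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Refund requests require an order number.",
]
print(build_prompt("How long do refunds take?", docs))
```

In production the overlap scoring would typically be replaced by embedding-based or keyword search, but the shape (retrieve, then template into the prompt) stays the same.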