Jack Tolley - @TechTolley
JUXT - @juxtpro
High-level overview of LLMs and personal experience
Completion: Send text, get reply
Chat: A string of completions over the growing conversation history
Agentic: Chat with knowledge base and tools
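The three modes above can be sketched as data shapes: a completion is one request and one reply, and chat simply replays a growing history of those completions. A minimal sketch using the common role/content message shape (an assumption for illustration, not tied to any specific provider's API):

```python
# Sketch: "chat" is repeated completions over a growing message history.
# The role/content dict shape mirrors common chat APIs (assumption).

def new_chat(system_prompt="You are a helpful assistant."):
    # Every chat starts from a system message setting the model's behavior.
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_text, assistant_reply):
    # Each turn appends the user message and the model's completion;
    # the next completion request sends the whole list back to the model.
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_reply})
    return history
```

Agentic use extends the same loop: between turns, the application injects knowledge-base lookups or tool results into the history before asking for the next completion.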
Powerful but expensive models
Improving rapidly, customizable
Dedicated chips (e.g. Groq) for running LLMs
Embeddings: capturing semantic meaning, leveraging your own data
Retrieval-Augmented Generation
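The core of RAG is the retrieval step: find the most relevant document and prepend it to the prompt before the completion call. In this toy sketch, word overlap stands in for real learned embeddings (an assumption to keep the example self-contained):

```python
# Toy RAG sketch: word-overlap similarity stands in for learned
# embeddings (assumption); real systems use an embedding model
# and a vector index.

def embed(text):
    # Crude "embedding": the set of lowercased words in the text.
    return set(text.lower().split())

def retrieve(query, docs):
    # Pick the document sharing the most words with the query.
    return max(docs, key=lambda d: len(embed(d) & embed(query)))

def build_prompt(query, docs):
    # Prepend the retrieved context so the model can ground its answer.
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}"
```

Swapping `embed` for a real embedding model and `retrieve` for a vector-store query gives the production shape of the same idea.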
Optimizing prompts for better performance
Techniques for better control and results
Provide context and input in a human-understandable format
Example: Using color names instead of RGB values
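The color example can be sketched as a preprocessing step: map raw RGB triples to the nearest named color before they reach the prompt, so the model sees "red" rather than an opaque triple. The small palette here is purely illustrative:

```python
# Sketch: translate machine-friendly RGB values into human-understandable
# color names before putting them in a prompt. Palette is illustrative.

NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color_name(rgb):
    # Pick the named color with the smallest squared distance in RGB space.
    return min(
        NAMED_COLORS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(NAMED_COLORS[name], rgb)),
    )

def describe_button(rgb):
    # The prompt now carries a word the model handles well.
    return f"Style the button {nearest_color_name(rgb)}."
```

The same principle applies beyond colors: dates as "last Tuesday", sizes as "roughly A4", anything the model has seen described in words.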
Expect some variation in output formats
Parse and convert outputs as needed
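A common way to absorb output-format variation is a tolerant parser: pull the JSON object out of a reply whether or not the model wrapped it in code fences or surrounding prose. A minimal sketch:

```python
import json
import re

def extract_json(reply):
    # Models sometimes return bare JSON, sometimes ```json fences,
    # sometimes JSON embedded in prose. Grab the first {...} span
    # and parse it, rather than assuming a fixed format.
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))
```

For stricter pipelines, retrying the completion when parsing fails is a natural next step.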
Break down tasks into multiple focused API calls
Chain models for different subtasks
Iteratively refine and improve solutions
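The three points above fit a simple pipeline shape: each subtask is its own focused call, and refinement is just another call in the chain. `call_model` here is a hypothetical stand-in for any completion API (possibly a different model per step):

```python
# Sketch of task decomposition: outline, draft, then refine, each as a
# separate focused call. `call_model` is a hypothetical stand-in for a
# real completion API; different steps could use different models.

def pipeline(task, call_model, refine_rounds=2):
    outline = call_model(f"Outline the steps for: {task}")
    draft = call_model(f"Write a solution following this outline:\n{outline}")
    for _ in range(refine_rounds):
        # Iterative refinement: feed the previous answer back in.
        draft = call_model(f"Improve this solution:\n{draft}")
    return draft
```

Keeping each call narrow tends to give the model less room to drift than one sprawling prompt.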
Fed Figma design screenshots to ChatGPT
Generated a solid starting point
Used LLMs to learn new frameworks, APIs, and concepts
Cut short the usual hunt through docs and sample code
Excels at generating basic code and examples
Used Copilot for intelligent code autocompletion
Generates test data, fills arrays, and more
Satisfying and time-saving experience
Consulted ChatGPT for DevOps tasks
Better than verbose or missing online documentation
Generates Dockerfiles, scripts, and deploy files
Integrated ChatGPT.nvim with Groq hardware
Blazing fast at 500 tokens/second
Currently free to use
For more experiments follow @TechTolley
For more webinars follow @juxtpro