A 10 minute crash course on AI fundamentals

đź‘‹ Welcome to New Vintage, a weekly taste of tech for wine professionals to level up with no-code and AI tools.


tl;dr Large Language Models (LLMs) are the "brains" behind AI tools like ChatGPT. The term "AI" today most commonly refers to Generative AI (one type of AI), where different types of models generate different types of output. Understanding models is the first step to identifying the right AI tools for your business.

AS A LEADER in the wine industry, you’ve likely heard the buzz about AI and tools like ChatGPT. But what exactly are these Large Language Models (LLMs) everyone is talking about? More importantly, how can they fit into your business in a practical, flexible way?

This article will break down LLMs in simple terms and help shift your perspective from viewing AI as just another piece of software to seeing it as a flexible, swap-in-and-out system of components. The goal is to empower you to make smarter decisions about AI – from choosing the right tools to spotting new opportunities for innovation in your winery.

If you wanted a crash course, this is for you.

What are LLMs? (In Plain English)

Large Language Models (LLMs) are a type of artificial intelligence that understands and generates human-like text. In essence, an LLM is an AI system trained on massive amounts of written content – imagine an AI that has read the entire internet, countless books, and articles. In doing so, it has learned the patterns of language from billions of words, across many topics. Thanks to this training, it can produce text that sounds surprisingly human. For example, an LLM can draft an email, answer questions, write a product description, or even hold a conversation.

Crucially, because they learn from so much data, LLMs can generalize – they can handle tasks or questions they weren’t explicitly taught. If you’ve used OpenAI’s ChatGPT or Anthropic’s Claude, you’ve interacted with an LLM. These models take an input (a prompt or question from you) and then generate a useful response based on everything they learned during training.

An easy way to think about it: a traditional program is like a recipe (fixed instructions), whereas an LLM is like a "chef" who has read every cookbook in the world – it has the knowledge to whip up something new on the fly (we’ll dive more into this difference next).

LLMs are part of a broader class of “foundation models”, meaning they serve as a base that can be adapted to many different tasks. Instead of building a new AI from scratch for each problem, businesses can start with a foundation model and fine-tune it (or prompt it) for their specific needs. This is a big change from older approaches where you’d make one AI model for each task. Modern LLMs provide a versatile foundation that can power chatbots, analyze documents, draft content, and more – all with the same core model. That flexibility is exactly why they’re grabbing headlines and why they matter for businesses like yours.

LLMs vs Traditional Software: A New Way to Think

It’s tempting to treat an AI model like just another software application – say, like your spreadsheet program or inventory system – but LLMs are fundamentally different from traditional software.

Traditional software is code-centric: a programmer writes explicit instructions telling the computer exactly what to do step by step. Thus, a traditional program can only do what it’s explicitly programmed to do. If you want it to handle a new scenario, a software developer has to write new code (think of a tool that needs a version upgrade or patch to gain new features).

LLMs, on the other hand, are data-centric. They aren’t programmed with step-by-step rules for every scenario. Instead, they learn patterns from data. During training, an LLM effectively “reads” a huge volume of text and develops its own internal model of knowledge – it figures out for itself how words relate, how sentences form, and how concepts cluster together. So when faced with a new prompt, an LLM isn’t executing a pre-written script; it’s generating an answer by relating your input to the internal knowledge it built up during training. This is where the term Generative AI comes from.

What does this mean in business terms? Flexibility. Because an LLM isn’t limited by pre-programmed rules, it can adapt to a wide variety of inputs and questions. It’s not explicitly coded for each possible customer query or each type of report you ask it to summarize. It has a broad understanding, so it can handle unexpected requests more gracefully. In short, Generative AI can be more dynamic and adaptable than the old rule-based systems most of us are used to. It can be updated not by rewriting code, but by improving instructions, fine-tuning on new data or even by swapping in a more advanced model.

This leads to a crucial mindset shift: Don’t think of an AI model as a static product with version 1.0, 2.0, etc. Think of it as a modular component in your system that you can upgrade or switch out as needed. If a new model comes along that performs better, you can often replace the old one without overhauling your entire software stack. For example, today you might use Model A for your chatbot, and next year switch to Model B if it gives more accurate answers – much like swapping out a piece of equipment.

Your overall application (say, a customer service platform) remains, but the “brain” behind it (the LLM) can be changed. This is very different from traditional software, where changing the core algorithm would be a massive update. Embracing this modular mindset means your business can stay agile: you’re not locked into one AI tool forever, and you can adapt as your needs (or available AI models) evolve.
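To make the "swappable brain" idea concrete, here is a minimal sketch in Python. The model classes are hypothetical stand-ins (not any real vendor's API): the point is that the application talks to one interface, and the model behind it can be replaced without rewriting the rest.

```python
# Two stand-in "brains". In a real system these might wrap different
# vendor APIs or an open-source model running on your own servers.
class ModelA:
    def complete(self, prompt: str) -> str:
        return f"[Model A] reply to: {prompt}"

class ModelB:
    def complete(self, prompt: str) -> str:
        return f"[Model B] reply to: {prompt}"

class ChatAssistant:
    """The application layer stays the same; only the model is swapped."""
    def __init__(self, model):
        self.model = model

    def answer(self, question: str) -> str:
        return self.model.complete(question)

assistant = ChatAssistant(ModelA())
# Next year, upgrade the "brain" without touching the assistant code:
assistant.model = ModelB()
```

Because the assistant only depends on the `complete` interface, swapping Model A for Model B is a one-line change rather than a rebuild.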

Choosing the Right AI Tool for the Job

Not all AI models are the same.

In fact, there are different types of models suited to different tasks, and part of leveraging AI effectively is choosing the right tool for your specific job. Two factors drive much of how models are categorized: size and modality.

The "size" of an LLM typically refers to its number of parameters, with single-digit billions considered "small" and three-digit billions "large". The loose rule of thumb is that the more parameters a model has, the greater its theoretical capacity for nuance and subtlety.

In practical terms, bigger isn't always better. Smaller models can be faster and cheaper, and can be tuned to perform just as well on specific tasks.

"Modality", on the other hand, refers to the type of data a given model was trained on, and therefore the type of data it is best prepared to work with.

Here are a few key categories of models and when they’re relevant:

  • Text-Only LLMs: These models deal purely with text. They are ideal when your task involves reading or generating written content. For instance, answering customer emails, writing tasting notes or product descriptions, analyzing social media comments, or summarizing reports. Most well-known LLMs (like the one that powered the first release of ChatGPT) started as text-only. They’ve been trained on text and excel at language-related tasks. If your use case is exclusively text-oriented, a text-only LLM may be an efficient choice (especially if fine-tuned), though most foundation models are quickly advancing past just text.
  • Multimodal Models: Multimodal means the AI can handle multiple types of data – not just text but also images, audio, or even video. Initially, LLMs were only text-based and could not understand a picture or a voice clip. Now, newer models can be multimodal, meaning they can take an image as input alongside text, or produce image-based outputs. For example, OpenAI’s GPT-4o (and many newer models) can "understand" images as well as text. In a wine business, a multimodal model might help with tasks like analyzing photos of vineyards (e.g., identifying grape health issues from images) and then providing a written report, or reading scanned documents and processing that information. If your business problem involves more than just words, you’d consider a multimodal model or a combination of a vision tool with an LLM. The key is to match the model to the data: text-only models for text tasks, multimodal for tasks mixing text + visuals.
  • Reasoning or Specialized Models: Some AI models (or configurations of models) are geared towards more complex reasoning, planning, or niche tasks. While general-purpose LLMs are very capable, certain applications benefit from specialization. For example, if you need an AI to perform complex logical reasoning or mathematical planning (say, optimizing delivery routes or creating a detailed financial projection), you might augment your LLM with additional tools or choose a model known for reasoning. Similarly, there are models fine-tuned for specific domains: an AI model fine-tuned specifically on chemistry might help a winemaker analyze lab results, or a model fine-tuned on legal case data might be better at surfacing legal precedent.

In practice, choosing the right AI tool might mean using a mix of models. You might use a big general LLM for interpreting user input, a different one to inspect images, and a smaller, cheaper model to classify whether a tweet about your wine is positive or negative.

Modern AI architecture allows these components to work in tandem. The flexibility of LLMs means you can integrate them with other systems: for example, an LLM can be the interface that explains the output of a more specialized analytics algorithm. The key for executives is to know that AI isn’t a single monolithic thing – it’s a toolbox. Pick the tool (or combination) that fits the job, and be ready to swap it out when a better tool comes along.
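To illustrate the "toolbox" idea, the cheap component in such a mix doesn't even need to be a neural model. The sketch below (hypothetical word lists, not a real sentiment library) triages obvious tweets with simple rules and hands only the ambiguous ones to a larger, more expensive model:

```python
# Hypothetical sketch: a cheap rule-based triage step handles obvious cases,
# so an expensive general LLM only sees the ambiguous ones.
POSITIVE = {"love", "great", "delicious", "amazing"}
NEGATIVE = {"awful", "corked", "disappointing", "terrible"}

def triage_sentiment(tweet: str) -> str:
    """Return 'positive', 'negative', or 'needs_llm' for ambiguous text."""
    words = set(tweet.lower().replace("!", "").split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "needs_llm"  # hand off to a larger model

print(triage_sentiment("I love this Pinot, delicious!"))  # positive
```

In a production pipeline, the `needs_llm` branch is where a general-purpose model earns its cost; the rules above would catch the easy majority for free.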

Implications for Business Strategy and Decision-Making

Understanding LLMs at a conceptual level helps in making smarter decisions about where to invest and how to innovate. Here are a few practical implications for how you strategize:

  • Flexible Infrastructure: Since AI models can be swapped in and out, it’s wise to build your AI projects with flexibility in mind. For example, if you’re integrating an LLM into your winery’s software (be it a CRM, e-commerce site, or internal dashboard), design it such that you can change the model behind the scenes. Maybe today you use an open-source model on your own servers, and tomorrow you switch to a new cloud API from a vendor offering a more advanced model. By keeping the system modular, your team can upgrade AI capabilities without rebuilding everything. This affects resource allocation: rather than pouring all your budget into one “forever” solution, you might allocate budget for ongoing model evaluations, updates, and experiments. Essentially, treat AI development as an iterative process, not a one-off software purchase.
  • Choosing the Right Opportunities: Knowing what LLMs can do (and the variety of models available) will help you spot high-impact use cases. Instead of applying AI everywhere blindly, you can identify areas where an LLM’s abilities align with business needs. For instance, if your customer service team spends hours answering similar questions, that’s a strong candidate for an LLM-powered chatbot. If your marketing team struggles to produce enough content, an LLM could help generate drafts. If you have piles of unstructured data (customer feedback, support requests, etc.) that no one has time to analyze, an LLM could mine those for insights. By understanding that AI models learn from data and can uncover patterns, you’ll start to see parts of your business in which data plus AI might yield better decisions or new strategies. Executives who grasp this can better prioritize AI investments – focusing on areas with rich data and clear ROI potential.
  • Evaluations ("Evals") are Crucial: If we assume a modular structure and an iterative process, having a strong way to measure differences in output becomes critical. This emerging competency is often referred to as "evals": success with an AI system is increasingly measured by running a set of test inputs and applying a rubric to judge the relative quality of the outputs. Sometimes the bar is subjective (e.g., does this customer service response sound on-brand?), sometimes it is quite concrete (e.g., how many physics questions did it get right?). The more critical the use case, the more you'll want to evolve your approach to evals.
  • Resource and Talent Allocation: Bringing LLMs into your operations isn’t just a tech choice; it’s also about people and process. Since AI is not like traditional software, you might need to train your staff or even hire new talent to manage these systems. The good news is many AI tools are becoming user-friendly, and even non-engineers can leverage them with the right training. Still, at the strategic level, plan for capacity-building: ensure your team understands how to use AI tools, interpret their output, and maintain them. This means part of your budget and leadership attention should go to training and governance, not just buying products.
  • Ethical and Quality Considerations: Finally, decision-makers should remain aware that AI flexibility comes with variance. Because LLMs learn from data, they can also learn biases or occasionally generate incorrect information (AI folks call this “hallucination” when the model makes something up). Having a modular approach means if one model is not performing or has issues, you can replace or fine-tune it – but you need processes to detect those issues. Allocate resources to monitoring AI outputs and setting guidelines (an AI policy) for its use. This ensures that as you swap models in and out, you maintain standards for accuracy, fairness, and quality. For example, if an AI model starts suggesting incorrect food pairings or misinterpreting customer tone, you’d want a human review process to catch that and decide what adjustments are appropriate.
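To ground the "evals" point above, here is a minimal sketch of an eval harness. The model is a stub with canned answers, and the questions and scoring rubric are hypothetical; in practice the model function would call your actual AI system.

```python
# Minimal "evals" sketch: run fixed test prompts through a model function
# and score the outputs against a simple rubric.
def stub_model(prompt: str) -> str:
    canned = {
        "What grape is Chablis made from?": "Chablis is made from Chardonnay.",
        "Is Rioja in France?": "No, Rioja is a wine region in Spain.",
    }
    return canned.get(prompt, "I'm not sure.")

# Each case: (prompt, a string the answer must contain to count as a pass).
EVAL_SET = [
    ("What grape is Chablis made from?", "chardonnay"),
    ("Is Rioja in France?", "spain"),
]

def run_evals(model_fn, cases) -> float:
    passed = sum(1 for prompt, must_contain in cases
                 if must_contain in model_fn(prompt).lower())
    return passed / len(cases)

print(f"Pass rate: {run_evals(stub_model, EVAL_SET):.0%}")  # Pass rate: 100%
```

The same `EVAL_SET` can be re-run whenever you swap in a new model, giving you an apples-to-apples comparison before you commit to the change.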

In summary, understanding LLMs helps you move from a “set it and forget it” mentality to a more agile, ongoing strategy. You’ll budget not just for initial deployment, but for continuous improvement. You’ll view AI not as magic, but as another business asset – one that can give great returns if managed well. And importantly, you’ll be better equipped to drive AI initiatives that truly align with your business goals, rather than chasing hype.

For executives, that means AI is moving into a role much like a versatile team member or consultant – ready to be deployed in different roles as your needs change.

Key Terms to Remember

  • Prompting / Prompt Engineering
    The process of refining the instructions given to an LLM to improve its output.
  • Large Language Model (LLM)
    A type of AI model trained on vast amounts of text data to understand and generate human-like language. Examples include OpenAI’s GPT-4 and Anthropic’s Claude.
  • Foundation Model
    A broad term for large AI models trained on massive, diverse datasets that can be adapted (fine-tuned) to many different tasks. LLMs are one subset of foundation models focused on text.
  • Parameters
    The “knobs” or “dials” inside an AI model that get adjusted during training. A higher number of parameters often (but not always) means the model can capture more complex patterns.
  • Training Data
    The text, images, or other information an AI “reads” to learn language patterns. For an LLM, this might include books, websites, and articles from all over the internet.
  • Fine-Tuning
    The process of taking a general-purpose model and re-training it on a smaller set of specialized data (e.g., wine reviews, technical documents) to improve performance on specific tasks.
  • Multimodal Model
    An AI model capable of handling more than one type of data—such as text, images, and audio—rather than only text.
  • Prompt
    The text or question you feed into an AI model to get a result. In an LLM-based chatbot, your typed query is the prompt.
  • Hallucination
    When an AI model produces an answer that sounds plausible but is factually incorrect or entirely fabricated.
  • API (Application Programming Interface)
    A set of tools and protocols that let different software components talk to each other. Many LLMs are accessed through an API—think of it as the “gateway” for your application to send prompts and receive AI-generated answers.

Final Thoughts

The world of Generative AI can seem complex, but at its core it’s about tools that learn and adapt. For the wine industry – steeped in tradition yet always innovating – LLMs offer a new kind of versatility. By moving away from the old mindset of “install software, then wait for the next version,” and instead thinking of AI in a modular, ever-evolving way, you position your business to be more agile and competitive.

Start experimenting now, blend tools, and learn with your peers. In doing so, you’ll be better equipped to ride the AI wave to improve operations, delight customers, and uncover new opportunities in the dynamic landscape of the wine business.

Hope this was helpful,

Stephen