
7 Prompt Engineering for Large Language Models

Learning Objectives

  • Explain how large language models interpret instructions and why clarity, context, and constraints matter more than clever phrasing.

  • Apply task-framing techniques to guide LLMs in supporting analysis, communication, and problem-solving tasks.

  • Use iterative interaction, follow-up questions, and evaluation to refine AI-generated outputs effectively.

  • Leverage perspective-taking and structured prompts to explore complex business issues and stakeholder viewpoints.

  • Distinguish between appropriate and inappropriate uses of LLMs in business decision-making contexts.

  • Integrate LLM outputs into business workflows while maintaining human judgment, accountability, and ethical responsibility.

  • Recognize the limits of prompting and articulate why governance and oversight are essential for responsible AI use.

Interacting effectively with Large Language Models: From prompting to task framing

Why “prompt engineering” is evolving

 

From early prompt engineering to clear, contextual, and iterative interactions.

When large language models first became widely accessible, users quickly discovered that small changes in wording could lead to dramatically different outputs. This led to the idea of prompt engineering—the practice of carefully crafting inputs to coax better responses from AI systems. At the time, this focus was both necessary and practical, as early LLMs were sensitive to phrasing and lacked strong instruction-following capabilities.

As LLMs have matured, however, their ability to understand natural language instructions has improved significantly. Modern models are more robust, conversational, and capable of handling ambiguity, multi-step tasks, and iterative refinement. As a result, success in working with LLMs now depends less on discovering clever prompt formulations and more on clearly framing goals, providing relevant context, and evaluating outputs critically. The emphasis has shifted from engineering prompts to designing effective interactions.

This evolution reflects a broader change in how AI systems are used in organizations. LLMs are no longer experimental tools operated by specialists; they are becoming general-purpose interfaces embedded in workflows across marketing, operations, human resources, analytics, and strategy. In these settings, the key skill is not technical precision in prompt wording, but the ability to communicate intent, constraints, and expectations in ways that align AI outputs with business objectives.

For business students and practitioners, this shift has important implications. Interacting effectively with LLMs requires the same foundational skills used in managing people and processes: defining objectives, setting boundaries, asking good questions, and exercising judgment. Throughout this chapter, the focus will therefore be on practical techniques for guiding, refining, and evaluating LLM outputs—treating AI not as a magic system that responds to perfect prompts, but as a powerful tool that amplifies the quality of human thinking and decision-making.

How large language models interpret instructions

Large language models interpret instructions through patterns in language rather than through understanding, reasoning, or intent in the human sense. When an LLM receives a prompt, it analyzes the text as a sequence of tokens and estimates the most likely continuation based on patterns learned during training. This means the model responds to how a request is framed—the goals implied, the constraints stated, and the context provided—rather than to an underlying purpose or truth. As a result, the clarity and structure of instructions matter more than specialized terminology or technical phrasing.

LLMs interpret the intent, context, and constraints in user input through patterns, not understanding.

LLMs are particularly sensitive to three elements of an instruction: intent, context, and constraints. Intent signals what the user is trying to accomplish (for example, summarizing, brainstorming, analyzing, or drafting). Context provides relevant background or assumptions that shape the response. Constraints define boundaries such as tone, length, format, audience, or perspective. When these elements are explicit, LLMs tend to produce outputs that are more relevant, coherent, and useful. When they are missing or vague, the model will still generate a response—but one that may not align with the user’s expectations.

Importantly, LLMs do not evaluate whether an instruction is reasonable, ethical, or correct. They attempt to comply with the request as written, even if the task is underspecified or internally inconsistent. This places responsibility on the user to frame requests thoughtfully and to evaluate outputs critically. Understanding how LLMs interpret instructions is therefore less about learning special prompting techniques and more about developing strong communication and problem-framing skills—capabilities that are equally valuable when working with human collaborators.

Task framing: The most important skill when working with LLMs

The single most important skill for interacting effectively with large language models is task framing—clearly defining what you want the system to help you accomplish and under what conditions. Unlike traditional software, LLMs do not operate through menus or fixed commands. They respond to natural language descriptions of goals, constraints, and expectations. As a result, the quality of an LLM’s output is closely tied to the quality of the task as it is described.

Framework for task framing.

Effective task framing typically includes several core elements. First, the goal clarifies the desired outcome, such as summarizing a report, generating ideas, or analyzing a situation. Second, context provides relevant background information the model should consider, including assumptions, prior decisions, or situational details. Third, constraints define boundaries on the response, such as tone, length, format, audience, or time horizon. In some cases, specifying a role or perspective—for example, asking the model to respond as a financial analyst or customer service manager—can further focus the output. Finally, strong task framing anticipates evaluation, prompting the user to assess whether the response meets the original objective.

To illustrate, consider the difference between a loosely framed request and a well-framed task. A prompt such as “Help me improve our customer service” provides little guidance and may result in a generic or unfocused response. By contrast, a framed task such as “Suggest three practical ways a mid-sized online retailer could reduce customer support response times, focusing on low-cost process improvements rather than new technology investments” gives the model a clear objective, relevant context, and meaningful constraints. The latter is far more likely to produce actionable insight.
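The goal, context, and constraint elements described above can be made explicit in a small helper that assembles a framed prompt. This is an illustrative sketch only; the function name and field labels are assumptions, not a standard API:

```python
def frame_task(goal, context=None, constraints=None, role=None):
    """Assemble a well-framed prompt from explicit components.

    All names here are illustrative; the point is stating goal,
    context, and constraints explicitly rather than implicitly.
    """
    parts = []
    if role:
        parts.append(f"Respond from the perspective of a {role}.")
    parts.append(f"Goal: {goal}")
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)


# The well-framed retailer example from the text, made explicit:
prompt = frame_task(
    goal="Suggest three practical ways to reduce customer support response times.",
    context="A mid-sized online retailer.",
    constraints=["focus on low-cost process improvements",
                 "avoid new technology investments"],
)
```

Separating the components this way also makes it easy to see what a loosely framed request is missing: the "Help me improve our customer service" prompt has a goal but no context or constraints at all.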

Task framing is not about writing perfect prompts on the first attempt. It is an iterative process that mirrors effective managerial communication. Managers routinely refine objectives, clarify constraints, and ask follow-up questions when working with teams. Interacting with LLMs works in much the same way. By treating AI as a collaborator that responds to clear goals and feedback, users can consistently obtain more relevant, useful, and trustworthy outputs.

Iterative interaction: Refining and evaluating LLM outputs

Improving LLM output requires human evaluation and iteration.

Effective use of large language models is rarely a one-step process. Even when a task is well framed, initial responses often benefit from refinement through follow-up questions, clarification, and evaluation. LLMs are designed to operate conversationally, allowing users to build on prior outputs, narrow focus, and adjust direction over multiple turns. This iterative interaction is one of their most powerful features and closely mirrors how humans collaborate to solve complex problems.

Follow-up prompts allow users to deepen, revise, or redirect an LLM’s response. A user might ask for greater detail, request an alternative perspective, impose new constraints, or challenge assumptions in the initial output. For example, after receiving a set of strategic recommendations, a user might follow up by asking the model to prioritize options, assess risks, or tailor suggestions to a specific organizational context. Each interaction provides additional information that helps the model generate more relevant and targeted responses.
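Under the hood, this follow-up pattern amounts to maintaining a growing conversation history in which each new turn carries all prior context. A minimal sketch follows; the actual call that sends the history to a model is omitted, since it varies by provider:

```python
# A conversation is an ordered list of turns; follow-ups inherit all context.
history = [
    {"role": "user",
     "content": "Recommend strategies to improve customer retention."},
]

def follow_up(history, model_reply, refinement):
    """Record the model's reply, then append the user's next refinement."""
    history.append({"role": "assistant", "content": model_reply})
    history.append({"role": "user", "content": refinement})
    return history

follow_up(history,
          "(model's initial recommendations would appear here)",
          "Prioritize these options by cost and flag the main risks "
          "for a mid-sized retailer.")
```

Because the full history is resent on every turn, each refinement narrows the model's focus without the user having to restate the original task.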

Equally important is the evaluation of outputs. LLMs do not assess their own accuracy or relevance; they generate responses that appear plausible based on patterns in language. Users must therefore evaluate whether the output aligns with the original goal, whether key assumptions are reasonable, and whether important information is missing or incorrect. This evaluation step reinforces the role of human judgment and prevents overreliance on AI-generated content.

Iteration also encourages better task framing over time. As users see how an LLM responds, they learn which details matter, which constraints need to be clarified, and how to phrase objectives more effectively. Rather than viewing imperfect outputs as failures, effective users treat them as feedback that informs the next interaction. This process transforms LLM use from a static request-response model into an active dialogue that supports thinking, analysis, and decision-making.

Using perspective and structure to support better thinking

Ask the LLM to approach a problem from different perspectives.

One of the most effective ways to use large language models is to deliberately ask them to explore problems from multiple perspectives or within a defined analytical structure. Because LLMs are trained on a wide range of viewpoints and discourse styles, they can quickly surface alternative interpretations, stakeholder concerns, and trade-offs that might otherwise be overlooked. When used thoughtfully, this capability supports deeper analysis and more balanced decision-making.

Perspective-based interaction involves asking an LLM to examine an issue through the lens of specific roles, stakeholders, or contexts. For example, a manager might ask how a proposed policy change would be viewed by customers, employees, regulators, or investors. Similarly, an LLM can be prompted to respond as a marketing manager, operations leader, or financial analyst, allowing users to explore how priorities and concerns shift across organizational roles. This technique is particularly useful for anticipating resistance, identifying unintended consequences, and preparing for discussions with diverse audiences.
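In practice, perspective-based interaction can be as simple as generating one prompt per stakeholder lens for the same proposal. The stakeholder list and wording below are illustrative examples, not a prescribed method:

```python
def perspective_prompts(proposal, stakeholders):
    """Generate one prompt per stakeholder lens for the same proposal."""
    return [
        f"From the perspective of {s}, what concerns and priorities "
        f"would this proposal raise: {proposal}"
        for s in stakeholders
    ]

prompts = perspective_prompts(
    "Shift all customer support to an AI-first chat channel.",
    ["customers", "employees", "regulators", "investors"],
)
```

Running the same proposal through each lens and comparing the responses is a quick way to surface resistance and unintended consequences before a real stakeholder raises them.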

Structured prompts further enhance the quality of analysis by imposing an explicit framework on the model’s response. Asking for pros and cons, risks and benefits, short-term versus long-term impacts, or ethical and economic considerations encourages the LLM to organize its output in a way that supports comparison and evaluation. These structures do not guarantee correctness, but they help ensure that important dimensions of a problem are considered systematically rather than implicitly.
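A structured prompt simply names the analytical dimensions the response must cover. As a hedged sketch (the headings and helper name are illustrative):

```python
def structured_prompt(issue, dimensions):
    """Ask for an analysis organized under explicit headings."""
    headings = "\n".join(f"- {d}" for d in dimensions)
    return (
        f"Analyze the following issue: {issue}\n"
        "Organize the response under these headings:\n"
        f"{headings}"
    )

analysis_prompt = structured_prompt(
    "Introducing a four-day work week",
    ["Short-term impacts", "Long-term impacts", "Risks", "Benefits"],
)
```

Naming the dimensions up front makes gaps visible: if a heading comes back empty or thin, that is a signal for a follow-up question rather than a finished analysis.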

It is important to recognize that perspective-taking and structured analysis do not replace critical thinking. The LLM does not determine which viewpoint is most appropriate or which trade-offs should be accepted. Instead, it acts as a cognitive aid—surfacing possibilities, organizing information, and accelerating sensemaking. The responsibility for interpreting, prioritizing, and acting on these insights remains with the human decision-maker.

By using LLMs to explore perspectives and impose structure, students and practitioners can move beyond surface-level responses and engage more deliberately with complex business issues. This approach positions AI as a partner in analysis rather than an authority, reinforcing the central role of human judgment in organizational decision-making.

Controlling and integrating LLM outputs into business workflows

For LLMs to be useful in organizational settings, their outputs must be easy to evaluate, share, and incorporate into existing workflows. One of the most effective ways to achieve this is by explicitly specifying the desired format and structure of the response. Rather than accepting free-form text, users can ask LLMs to produce tables, bullet points, summaries, outlines, drafts, or step-by-step explanations. LLMs can also produce formatted, downloadable files that can be opened in a word processor or spreadsheet, and many can produce good-quality illustrations and images (like some used in this textbook). Structured outputs reduce cognitive load, make review easier, and allow results to be reused in documents, presentations, or decision processes.

LLMs can produce many types of artifacts useful in business.

Format control is particularly valuable when LLMs are used for analysis or planning. Asking for a comparison table, a prioritized list with brief justifications, or a short executive summary encourages the model to organize information in ways that support managerial decision-making. Similarly, specifying constraints such as word limits, audience level, or tone helps ensure that outputs align with professional expectations and organizational norms. These instructions do not require technical expertise; they reflect the same clarity and precision expected in effective business communication.

Integration also involves recognizing that LLM outputs are often intermediate artifacts rather than final deliverables. Drafts generated by an LLM may serve as starting points for reports, emails, policies, or presentations, but they should be reviewed, edited, and contextualized by humans before use. Treating AI-generated content as a first pass—rather than a finished product—reinforces accountability and improves overall quality.

Finally, output control supports consistency when LLMs are used repeatedly for similar tasks. By reusing structured instructions or templates, organizations can reduce variability in responses and improve reliability across teams and use cases. In this way, controlling output format becomes not just a prompting technique, but a mechanism for embedding LLMs more effectively into everyday business work.
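Reusable structured instructions can be as lightweight as a shared string template with audience, length, and format slots filled in per use. The template text below is an illustrative example, not an organizational standard:

```python
SUMMARY_TEMPLATE = (
    "Summarize the text below for {audience} in at most {max_words} words.\n"
    "Format the answer as {fmt}.\n"
    "\n"
    "Text:\n{text}"
)

def make_summary_prompt(text, audience="senior managers",
                        max_words=150, fmt="three bullet points"):
    """Fill the shared template so repeated tasks get consistent instructions."""
    return SUMMARY_TEMPLATE.format(
        text=text, audience=audience, max_words=max_words, fmt=fmt)

# Hypothetical input text, for illustration only:
p = make_summary_prompt("Q3 support ticket volumes rose 18 percent year over year.")
```

Keeping the template in one shared place means teams change the instructions once and every downstream use inherits the same constraints, which is exactly the reduction in variability the text describes.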

What interaction and prompting cannot solve

LLMs can augment but cannot replace human decision-making.

While effective task framing, iteration, and structure can significantly improve LLM outputs, there are important limitations that better prompting alone cannot overcome. Large language models do not verify facts, reason independently, or evaluate the consequences of their recommendations. No amount of careful phrasing can guarantee accuracy, completeness, or appropriateness in all contexts. Understanding these limits is essential to using LLMs responsibly in business settings.

Prompting cannot eliminate bias or ethical risk. LLMs reflect patterns in their training data, and while careful instruction can reduce undesirable outputs, it cannot fully remove underlying biases or ensure fairness across all situations. Similarly, prompting does not confer domain authority. An LLM may generate responses that sound confident and well-reasoned even when they are incorrect or poorly suited to a specific industry, regulatory environment, or organizational context.

Prompting also cannot replace human accountability. Decisions that affect employees, customers, finances, or compliance require judgment, contextual awareness, and responsibility—qualities that remain firmly human. LLMs can support analysis, surface options, and accelerate communication, but they do not assume ownership of outcomes. Treating AI-generated content as authoritative rather than advisory increases organizational risk.

Recognizing what interaction techniques cannot solve helps position LLMs appropriately within business processes. Their value lies in augmenting human thinking, not in bypassing it. By pairing effective interaction with critical evaluation, governance, and oversight, organizations can leverage LLMs productively while maintaining control, responsibility, and trust.

From prompting skills to organizational capability

As large language models become embedded in business processes, the ability to interact effectively with them extends beyond individual skill and becomes an organizational capability. While this chapter has focused on how individuals frame tasks, iterate, and evaluate outputs, the broader challenge for organizations is ensuring that these practices are applied consistently, responsibly, and in alignment with business objectives. The value of LLMs is realized not through isolated interactions, but through thoughtful integration into workflows, roles, and decision-making processes.

Organizations that use LLMs effectively establish shared norms for how AI-generated content is created, reviewed, and applied. This includes defining appropriate use cases, setting expectations for human oversight, and providing guidance on when AI support is advisory versus inappropriate. Over time, teams may develop reusable task templates, structured prompts, or standardized output formats that reduce variability and improve reliability. In this way, interaction practices evolve from ad hoc experimentation into repeatable processes.

Importantly, developing organizational capability also requires reinforcing accountability and judgment. LLMs can accelerate analysis and communication, but responsibility for decisions remains with people. Managers must ensure that AI outputs are interpreted in context, validated when necessary, and used in ways that align with ethical, legal, and strategic considerations. This shift—from using AI as a novelty to treating it as an integral part of work—mirrors earlier waves of digital transformation.

In the chapters that follow, these interaction skills provide a foundation for understanding more advanced uses of AI, including workflow automation, agent-based systems, and governance frameworks. As AI systems become more capable and autonomous, the principles introduced here—clear intent, structured interaction, evaluation, and accountability—remain central. Effective use of LLMs is ultimately not about mastering prompts, but about designing thoughtful human–AI collaboration within organizations.

Chapter Summary

This chapter reframed prompt engineering as the broader skill of interacting effectively with large language models (LLMs) in business contexts. Rather than emphasizing clever wording or specialized syntax, the chapter highlighted how successful use of LLMs depends on clearly framing tasks, providing relevant context, specifying constraints, and critically evaluating outputs. As LLMs have become more capable and conversational, the quality of human interaction—rather than the technical precision of prompts—has become the primary determinant of value.

The chapter explained how LLMs interpret instructions based on patterns in language rather than understanding or intent, underscoring the importance of clarity, structure, and iteration. Techniques such as task framing, follow-up questioning, perspective taking, and structured outputs were presented as practical ways to support analysis, communication, and decision-making. These approaches position LLMs as cognitive support tools that augment human thinking rather than replace it.

At the same time, the chapter emphasized clear boundaries. No amount of prompting can guarantee correctness, eliminate bias, or transfer accountability to an AI system. Effective use of LLMs therefore requires human judgment, oversight, and responsibility, particularly in high-stakes or organizational settings. By treating interaction with LLMs as a managerial and organizational capability—rather than a technical trick—students are better prepared to use AI thoughtfully, responsibly, and productively as these systems continue to evolve.

Discussion Questions

  1. In what ways is interacting with an LLM similar to managing a human team member? In what ways is it fundamentally different?
  2. Why might a well-structured but incorrect AI response be more dangerous than an obviously flawed one in a business setting?
  3. Consider a business decision you’ve made recently. How could an LLM have supported your thinking without replacing your judgment?
  4. When, if ever, is it inappropriate to use an LLM for assistance—even if the output appears accurate and well written?
  5. How can perspective-based prompting help organizations anticipate resistance or unintended consequences of strategic decisions?
  6. What risks arise when organizations treat AI-generated outputs as final deliverables rather than intermediate drafts?
  7. How should organizations balance efficiency gains from LLMs with the need for human accountability and ethical oversight?
  8. In your view, which is harder to teach: technical prompting skills or critical evaluation of AI outputs? Why?
  9. As AI systems become more autonomous and embedded in workflows, which principles from this chapter will remain most important—and why?
  10. How does reframing “prompt engineering” as task framing and interaction design change how managers should think about using AI at work?