TL;DR: Most people struggle with AI because they treat it like a conversation. It is not a conversation; it is a System. Google recently highlighted the PASEC prompting framework, which transforms vague requests into precise instructions. If you analyze this framework, you realize it is identical to Systemic Design. Prompting is no longer an art; it is now a discipline of requirements engineering.
James here, CEO of Mercury Technology Solutions. Narita, Chiba, Japan - December 18, 2025
I often see smart engineers struggling to get good results from LLMs. They treat the prompt box like a search bar or a chat window. They type: "Write me a report on cloud trends."
Then they complain when the output is generic garbage.
The problem isn't the model. The problem is a lack of System Definition.
Google recently formalized a high-quality prompting framework called PASEC. On the surface, it looks like just another acronym. But if you look deeper, it is a blueprint for Systemic Design.
The PASEC Protocol
To control a stochastic (random) system like an LLM, you must define the boundary conditions. PASEC does exactly that:
- P | Persona (Who): Defining the agent's role.
- A | Aim (What): Defining the specific mission or functional requirement.
- S | Structure (How): Defining the output interface and format.
- E | Effective/Constraints (Boundaries): Defining what not to do.
- C | Context (Input): Defining the environmental variables.
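The five fields above can be treated literally as a spec: a structured object you fill in before you ever touch the chat box. As a minimal sketch (the class and field names here are my own illustration, not an official PASEC API):

```python
from dataclasses import dataclass

@dataclass
class PasecPrompt:
    persona: str      # P - who the agent is
    aim: str          # A - the specific mission
    structure: str    # S - required output format
    constraints: str  # E - boundaries: what not to do
    context: str      # C - environmental inputs

    def render(self) -> str:
        # Assemble the five fields into one instruction block.
        return "\n\n".join([
            f"Persona: {self.persona}",
            f"Aim: {self.aim}",
            f"Context: {self.context}",
            f"Constraints: {self.constraints}",
            f"Structure: {self.structure}",
        ])
```

The point of the object is not the code itself; it is that an empty field is now visible. A vague prompt is just a `PasecPrompt` with four blank attributes.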
Why This is Actually Systems Engineering
If you have a background in Systems Engineering or Product Architecture, this structure should look hauntingly familiar.
An LLM is a Black Box System. It takes inputs and processes them into outputs. In traditional engineering, you would never ask a factory to "build a car." You would provide a spec sheet.
PASEC is simply the spec sheet for the Intelligence Age.
Let’s map the logic:
- Persona = System Architecture:
- Engineering: "This is a high-torque electric motor."
- PASEC: "You are a Senior Systems Architect with 20 years of experience."
- Why: It sets the Operating Parameters and biases which region of the model's knowledge gets surfaced.
- Aim = Functional Requirements:
- Engineering: "The motor must lift 500kg."
- PASEC: "Your goal is to critique this code for security vulnerabilities."
- Why: It defines the Success Criteria.
- Context = Environmental Inputs:
- Engineering: "The motor operates in -20°C weather."
- PASEC: "The audience is non-technical board members; the company is facing a budget cut."
- Why: It provides the State Variables necessary for the system to process the logic correctly.
- Effective (Constraints) = Boundary Conditions:
- Engineering: "Do not exceed 240V; do not overheat."
- PASEC: "Do not use jargon. Limit response to 500 words. No moralizing."
- Why: In systems theory, constraints are more important than goals. They narrow the solution space from "infinite" to "useful."
- Structure = Interface Design:
- Engineering: "Output via a 3-pin plug."
- PASEC: "Output as a Markdown table with columns for Risk, Impact, and Mitigation."
- Why: It ensures the output integrates with the next step in your workflow (the downstream system).
Case Study: The "Vibe" vs. The "Spec"
Let's look at the difference between a "Chat" prompt and a "Systemic" prompt.
The "Chat" Prompt (Failure Mode):
"Help me write an email to clients about our new AI feature."
The PASEC "Systemic" Prompt (Success Mode):
- (P)ersona: You are a Senior Product Marketing Manager specializing in B2B SaaS.
- (C)ontext: We are launching "Mercury AI," which automates code verification. Our clients are CTOs who are skeptical of AI hallucinations.
- (A)im: Write a launch email that persuades them to book a demo. Focus on trust and security, not just speed.
- (E)ffective (Constraints): Max 200 words. Tone should be professional but urgent. Do not use buzzwords like "Game-changer."
- (S)tructure:
- Subject Line (3 options)
- Body (Problem -> Agitation -> Solution)
- CTA
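The bullets above are not five separate messages; they compile into a single instruction block. A hedged sketch of that assembly step (the `build_prompt` helper is hypothetical, written for illustration):

```python
def build_prompt(persona: str, context: str, aim: str,
                 constraints: str, structure: str) -> str:
    """Join the PASEC fields into one spec-style prompt string."""
    sections = {
        "Persona": persona,
        "Context": context,
        "Aim": aim,
        "Constraints": constraints,
        "Structure": structure,
    }
    # One labeled section per field, in a fixed order.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

prompt = build_prompt(
    persona="You are a Senior Product Marketing Manager specializing in B2B SaaS.",
    context=('We are launching "Mercury AI," which automates code verification. '
             "Our clients are CTOs who are skeptical of AI hallucinations."),
    aim=("Write a launch email that persuades them to book a demo. "
         "Focus on trust and security, not just speed."),
    constraints=('Max 200 words. Tone should be professional but urgent. '
                 'Do not use buzzwords like "Game-changer."'),
    structure="Subject Line (3 options); Body (Problem -> Agitation -> Solution); CTA.",
)
```

Whatever string this produces is what actually gets pasted into the model; the framework's value is that every section was decided deliberately before sending.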
Conclusion: You Are the Architect
PASEC does not work because it is a magic trick. It works because it forces you to treat the AI as a computational component rather than a magic genie.
We are moving out of the phase of "Prompt Whispering" and into the phase of Prompt Engineering.
- Whispering is hoping the model understands you.
- Engineering is defining the system so well that the model has no choice but to be correct.
Next time you open Claude or ChatGPT, don't just talk. Design the system.
Mercury Technology Solutions: Accelerate Digitality.