---
title: Designing Conversation Processes
deprecated: false
hidden: false
metadata:
  robots: index
---

As an agent engineer, your responsibility is to design the best context for your AI agents. Every architectural decision trades off across five dimensions:

* **Functionality**: What operations are possible?
* **ML Performance**: How accurately does the Reasoning Engine interpret and use your plugin?
* **User Clarity**: How easy is it for users to understand what happened?
* **Latency**: How fast does the plugin execute (both execution time and end-to-end user experience)?
* **Maintainability**: How easy is it to modify or extend?

This guide presents the **work backwards methodology**: start from your goal and trace dependencies back to their sources. For each architectural decision, you'll see the tradeoffs explicitly: what you gain, what you sacrifice, and when to choose each approach.

# Work Backwards from the Goal

For every plugin:

1. **Identify the core action** (the "goal" – the final API call)
2. **List all input arguments** (just list them first)
3. **Trace each input to its source:**
   * **Meta info** (current_user, current_date, etc.)
   * **User provides** (becomes a slot)
   * **Another action** (trace that action's inputs recursively)
4. **Repeat step 3** for any dependent actions until everything traces to meta info or slots
5. **Design slot collection** (this is where context engineering decisions happen)

**You're done when:** All inputs are either meta info, slots, or outputs from actions whose inputs are meta info/slots.

```mermaid
flowchart TB
    A[Core Action] --> B[List Input Arguments]
    B --> C[Trace Each Input]
    C --> D{Meta Info?}
    C --> E{User Provides?}
    C --> F{Another Action?}
    D --> G[Map from context]
    E --> H[Design Slot]
    F --> I[Trace dependencies]
    I --> C
    H --> M
    G --> M[All inputs traced]
    style A fill:#e8f5e9
    style M fill:#c8e6c9
```

**During slot collection design** (step 5), you make critical context engineering decisions. The next section explores these tradeoffs in depth.
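The five-step trace above can be sketched as a small recursive walk over actions and their inputs. This is an illustrative data model, not a Moveworks API; the class names, field names, and source labels are ours.

```python
# A minimal sketch of the "work backwards" trace (steps 1-4).
# Everything here is hypothetical scaffolding for illustration.
from dataclasses import dataclass, field


@dataclass
class ActionInput:
    name: str
    source: str                       # "meta_info", "slot", or "action"
    producer: "Action | None" = None  # the dependent action, if source == "action"


@dataclass
class Action:
    name: str
    inputs: "list[ActionInput]" = field(default_factory=list)


def trace(action: Action) -> "list[str]":
    """Recursively walk inputs until everything bottoms out in meta info or slots."""
    resolved = []
    for inp in action.inputs:
        if inp.source in ("meta_info", "slot"):
            resolved.append(f"{action.name}.{inp.name} <- {inp.source}")
        elif inp.source == "action" and inp.producer is not None:
            resolved.extend(trace(inp.producer))  # step 4: repeat for dependents
    return resolved


# Example 1's dependency chain, encoded with this sketch:
resolve_ids = Action("resolve_ids", [ActionInput("users", "slot")])
view_pto = Action("view_pto", [
    ActionInput("employee_ids", "action", resolve_ids),
    ActionInput("start_date", "slot"),
    ActionInput("end_date", "slot"),
])
print(trace(view_pto))
```

The recursion terminates exactly at the "you're done" condition above: every leaf is either meta info or a slot.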
***

# Example 1: View Team PTO Requests

```
Core Action: GET /v1/scheduling/employee_timeoff/multi_read
```

First, list every input this action requires:

| Input          | Format       | Required? |
| -------------- | ------------ | --------- |
| `employee_ids` | List[String] | Yes       |
| `start_date`   | YYYY-MM-DD   | Yes       |
| `end_date`     | YYYY-MM-DD   | Yes       |
| `state`        | String       | No        |

Now extend the table: where does each input come from?

| Input          | Format       | Required? | Source             | How to get it                        |
| -------------- | ------------ | --------- | ------------------ | ------------------------------------ |
| `employee_ids` | List[String] | Yes       | **Another Action** | Convert user emails → UKG system IDs |
| `start_date`   | YYYY-MM-DD   | Yes       | **User provides**  | User specifies date range            |
| `end_date`     | YYYY-MM-DD   | Yes       | **User provides**  | User specifies date range            |
| `state`        | String       | No        | **User provides**  | Optional filter (default: "All")     |

**Key insight:** `employee_ids` comes from another action. That means we're not done; we need to trace that action's inputs.

The action that converts user emails to UKG employee IDs:

```
Action: Convert Users to UKG IDs
API: POST /identity-graph/resolve
```

List its inputs, then trace them:

| Input   | Format     | Required? | Source            | How to get it                              |
| ------- | ---------- | --------- | ----------------- | ------------------------------------------ |
| `users` | List[User] | Yes       | **User provides** | User specifies which team members (emails) |

Now everything traces to user-provided data!

**Complete tracing result** for the core action:
| Input            | Source         | Chain                                                                                              |
| ---------------- | -------------- | -------------------------------------------------------------------------------------------------- |
| `employee_ids`   | Another Action | Collected from an API that takes user email addresses; retrieved based on `target_persons` (below) |
| `target_persons` | User provides  | Direct from user                                                                                   |
| `start_date`     | User provides  | Direct from user                                                                                   |
| `end_date`       | User provides  | Direct from user                                                                                   |
| `state`          | User provides  | Direct from user                                                                                   |
All inputs now trace to "User provides" (either directly or through actions). **You're done with backtracing.**
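At runtime, the traced chain executes in the opposite order of the trace: the dependent resolve action runs first, and its output feeds the core action. A minimal sketch of that ordering, assuming a hypothetical request-plan shape (the endpoint paths and parameters come from the tables above; everything else, including the function name, is illustrative):

```python
# Sketch of the execution order implied by the trace above.
# The dict shape is our own convention, not a platform format.
def plan_requests(user_emails, start_date, end_date, state="All"):
    """Return the ordered API calls needed to serve the plugin."""
    return [
        # Step 1: dependent action - resolve emails to UKG employee IDs.
        {"method": "POST", "path": "/identity-graph/resolve",
         "body": {"users": user_emails}},
        # Step 2: core action - consumes the IDs produced by step 1.
        {"method": "GET", "path": "/v1/scheduling/employee_timeoff/multi_read",
         "params": {"employee_ids": "<from step 1>",
                    "start_date": start_date,   # YYYY-MM-DD
                    "end_date": end_date,       # YYYY-MM-DD
                    "state": state}},           # optional filter, default "All"
    ]


plan = plan_requests(["teammate@corp.example"], "2024-06-01", "2024-06-30")
```

Backtracing works goal-first; execution runs dependency-first.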
All user-provided values are now slots:

| Slot             | Type                    | Description                   |
| ---------------- | ----------------------- | ----------------------------- |
| `target_persons` | List[User]              | Team members to check PTO for |
| `start_date`     | Date                    | Start of date range           |
| `end_date`       | Date                    | End of date range             |
| `state_filter`   | String (default: "All") | Optional state filter         |

Now you can configure these slots with the appropriate data types.

***
# Example 2: Get Employee Timesheet

```
Core Action: GET_TIMESHEET
```

| Input                | Format       | Required? |
| -------------------- | ------------ | --------- |
| `employee_id`        | Alphanumeric | Yes       |
| `last_day_of_period` | YYYY-MM-DD   | Yes       |

Extend the table: where does each input come from?

| Input                | Format       | Required? | Source            | How to get it               |
| -------------------- | ------------ | --------- | ----------------- | --------------------------- |
| `employee_id`        | Alphanumeric | Yes       | **Meta info**     | Current user (from context) |
| `last_day_of_period` | YYYY-MM-DD   | Yes       | **User provides** | User specifies the period   |

**You're done with backtracing.** No dependent actions are needed, so Step 4 is skipped.

## Step 5: Design Slots

Only one slot is needed:

| Input            | Format     | Constraints                               |
| ---------------- | ---------- | ----------------------------------------- |
| `requested_date` | YYYY-MM-DD | Day must be `15` OR the last day of month |

**This is where design decisions matter.** How do you enforce the constraint?

### Implementation Approaches

**Approach 1: Slot Description Only**

Just describe the constraint in the slot description.

* ❌ **UX:** No user-facing validation. Relies on the API to throw a 400, forcing the user to restart.
* ❌ **ML Performance:** The Reasoning Engine might not follow the rule consistently.
* ✅ **Maintainability:** Simplest implementation.
* ✅ **Latency:** Fast; no extra operations.

**Approach 2: Slot Validation Policy**

Add a validation rule using [DSL](https://help.moveworks.com/docs/moveworks-dsl-reference):

```python
value.$PARSE_TIME().$ADD_DATE(0, 0, 1).$FORMAT_TIME("%d") == "01" OR value.$PARSE_TIME().$FORMAT_TIME("%d") == "15"
```

* ✅ **UX:** Early validation with a clear error message.
* ✅ **ML Performance:** The Reasoning Engine understands why validation failed.
* ✅ **Latency:** Low latency; no extra actions.
* ❌ **Maintainability:** The DSL rule requires maintenance.

**Approach 3: Generate Structured Value Action**

Use two activities in the conversation process: (1) generate the structured value, (2) get the timesheet.

* ✅ **Developer Visibility:** Less hidden logic.
* ❌ **ML Performance:** Multiple activities lead to context bloat.
* ❌ **Latency:** Many extra LLM calls:
  1. LLM call from the Reasoning Engine to pick the plugin
  2. LLM call from the action activity
  3. LLM call to parse the outputs from the activity and decide whether to show them to the user
  4. LLM call to run the timesheet action
  5. LLM call to process the timesheet results
**Approach 4: Compound Action Validation**

Validate inside a compound action before calling the API.

* ✅ **ML Performance:** Fully abstracts the validation logic away from the Reasoning Engine.
* ❌ **UX:** With bad input, the user sees "processing" and then has to retry.
* ❌ **Latency:** Must wait for the compound action to run its validation.
* ❌ **Maintainability:** An additional asset to maintain.
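For reference, the constraint these approaches enforce can be restated in plain Python. The function name is ours, not a platform API; only the "15th or last day of the month" rule comes from the slot definition above.

```python
from datetime import date, timedelta


def is_valid_period_end(value: str) -> bool:
    """True when the date is the 15th or the last day of its month."""
    d = date.fromisoformat(value)             # expects YYYY-MM-DD
    next_day = d + timedelta(days=1)          # mirrors $ADD_DATE(0, 0, 1) in the DSL rule
    return d.day == 15 or next_day.day == 1   # last day iff tomorrow is the 1st
```

The day-after trick, which the DSL rule also uses, handles varying month lengths and leap years without a lookup table.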
As an Agent Engineer, you will need to make the right choice based on your organization's preferences and style. In this case, we recommend Approach 2, since it balances correctness, latency, and end-user experience well.