FAQs
Why do two users who asked the same question receive different answers?
The most common reason is that the two users' chat histories contained different context before the question was asked. The Moveworks Assistant takes chat history into account when developing answers, so even slight differences in how questions are asked and how answers are responded to can produce different answers. This allows the Moveworks Assistant to adapt its response using the most recent information from the preceding conversation and ensures the user gets a conversational, situation-aware response.
Why does the Moveworks Assistant's response vary for the same question from the same user?
The Moveworks Assistant has a flexible and powerful reasoning engine that weighs many dimensions of input before returning a result. This flexibility lets it handle complex requests, even calling multiple plugins in a row to fulfill a single request. As with a human agent, you should not expect the answer to be word-for-word identical every time; enter the exact same utterance into ChatGPT and you will notice its response varies as well. Many factors can affect the Moveworks Assistant's response, such as context from previous turns, the user's permissions, and more. When a user asks a question repeatedly, the utterance is identical but the context is not, so the Assistant picks up new information each time and may return a different response.
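This run-to-run variability can be illustrated with a toy temperature-sampling sketch. This is a generic illustration of how LLM decoding works, not Moveworks's actual implementation; the candidate answers and scores below are made up:

```python
import math
import random

def sample_token(scores: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Pick one candidate reply: greedy when temperature is 0, otherwise
    sample from a softmax over the scores scaled by temperature."""
    if temperature == 0:
        return max(scores, key=scores.get)  # always the top-scoring candidate
    weights = [math.exp(v / temperature) for v in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]

# Three hypothetical candidate answers with similar scores.
scores = {"Reset it at the password portal": 1.2,
          "Contact the IT help desk": 1.0,
          "See the password KB article": 0.8}

rng = random.Random(0)
# The same "utterance" sampled repeatedly yields more than one distinct answer.
samples = {sample_token(scores, temperature=1.0, rng=rng) for _ in range(50)}
```

With temperature at 0 the output is deterministic; with any nonzero temperature, repeated identical inputs produce a mix of answers, which is the same dynamic that makes a generative assistant's responses vary across attempts.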
What steps should I take to understand why there are different responses?
There are a few things you can do to inspect where and why the Moveworks Assistant provided a different answer to the same question:
- Look at the prior context of the two conversations. By inspecting the prior context, you can identify likely points of difference between the two users' interactions that explain why the answers differ slightly.
- Look at AI Reasoning Steps. Click the ℹ️ icon to look up both sets of AI Reasoning steps. By comparing them, you can see whether the Moveworks Assistant called different plugins to answer the question. While this may not directly answer why it did so, combining the AI Reasoning steps with the context history usually lets you hypothesize what happened.
- Look at the Citations in the Reference Page. Finally, check what information the Moveworks Assistant cited. Even when calling the same plugin, the Assistant may have retrieved different pieces of information, which would explain why the summary or answer differs. Again, this doesn't explain why it did so, but as with the AI Reasoning steps, the context history usually lets you hypothesize why.
Using the copilot clearhistory command
When testing different utterance variations, you may find that accumulated context produces different answers. The Moveworks Assistant often treats a repeated question of the same type as a signal that the previously provided option was not helpful, so it proactively changes its response to try other solutions. If you are doing heavy testing, you can enable the copilot clearhistory command to reset context between tests.
Reach out to Customer Success for more details and to get it enabled.
Why didn't the Moveworks Assistant return the expected resource (document/file/query result)?
These situations are difficult to assess without more context; however, there are several troubleshooting questions that can guide you toward more clarity within a few minutes:
- What specific resource were you expecting? This is always the most important place to start as it grounds the conversation in the exact resource in question instead of a theoretical one. Once the resource is identified, you can check a couple of things:
- Check Moveworks Setup for Ingestion: Verify that the resource is ingested. The resource could come from a knowledge system that is not ingested, or from a repository of an ingested knowledge system that the Moveworks Assistant does not have access to. If the resource isn't ingested, that explains why it is not being served.
- Check Moveworks Setup for ACLs: You can check ACLs as there could be an issue with the Moveworks Assistant ingesting updated ACLs or respecting them. If an issue is found, contact Moveworks Support.
- Review the contents of the document: Open the document and review its content to confirm it actually contains the information that answers the question. Sometimes the content is not written clearly enough, or it can be interpreted in different ways. If so, Moveworks suggests editing the article to make it clearer and then checking whether it serves when the question is asked again (keeping context in mind when re-testing, as explained above).
- Did you check the citations panel for citations? Look at other returned docs in citations to see if it is reasonable for one to be in conflict and get used for summarization. If there are two or more competing articles that could answer the question, the Moveworks Assistant may not know which is the “preferred” one. In situations like this it is best to consolidate answers into a single resource that can be cross-linked in other docs in order to avoid this conflict.
- Is this issue present in Moveworks Classic as well? Use the “Assistant Switch” command to see whether the resource returns as expected in Moveworks Classic. If it does, something differs in how the Moveworks Assistant interprets the resource. If it also fails to return, there may be an issue with annotations or relevance, which means Moveworks Support needs to get involved. This is especially true if the ingestion check above passed and the resource has been ingested properly.
How can you determine if the Moveworks Assistant response is a hallucination?
You may ask this question in a few different ways:
- “Why did the Moveworks Assistant return information that is not in our knowledge base?”
- “Why do I sometimes get responses without citations?”
- “The Moveworks Assistant gave a response, and I don’t know where it came from.”
Before getting into the troubleshooting steps, it’s important to understand that hallucinations are generally difficult to troubleshoot, as there is rarely a clear explanation for why the Moveworks Assistant hallucinated. However, keep one important characteristic in mind: the Moveworks Assistant does not search the open internet.
Now, there are several troubleshooting questions that can guide you towards more clarity within a few minutes:
- Were any plugins called? Check the AI Reasoning for the response. If no plugins were called, the response was likely a hallucination.
- Are there citations in the citations panel? A response with no citations is the tell-tale sign of a hallucination. If there are citations, the response is likely not a hallucination.
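The two checks above can be condensed into a simple triage rule. The sketch below is a hypothetical helper for manual triage notes, not a Moveworks API; the plugin and citation field names are assumptions:

```python
def likely_hallucination(plugins_called: list[str], citations: list[str]) -> bool:
    """Apply the two checks above: a response is a likely hallucination if
    the AI Reasoning shows no plugin calls, or the citations panel is empty."""
    return not plugins_called or not citations

# Illustrative triage of two observed responses (values are made up):
# no plugins and no citations        -> likely a hallucination
# a plugin call backed by a citation -> likely grounded
```

Treat the result as a signal, not a verdict: a cited response can still misstate what the source says, so spot-check the cited content when accuracy matters.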
Are all hallucinations bad?
It depends on the application. The ability to create content with minimal effort is a popular and useful application of LLMs. On the other hand, because the model is not using specific references, it may produce text that is factually accurate, but it may also mix up unrelated facts it has memorized or even make up facts or elements such as phone numbers, names, and places to fill the gaps in its generated text. This can produce output that is off-topic, self-contradictory, or factually incorrect. Note that, in some cases, the model can produce a faithful reproduction of its training data, such that a well-informed user can attest to its accuracy.
Why is the Moveworks Assistant slow or experiencing latency issues?
If you are migrating from Moveworks Classic to the Moveworks Assistant, you will immediately notice that the Assistant is slower. The slowness stems from several factors:
- Generative AI bots are generally slower than those that leverage discriminative AI models.
- The enterprise guardrails and conversation safeguards in the Moveworks Assistant design, such as toxicity filters, fact-checking models, readability checks, and citation checks, add latency in order to ensure factually accurate responses.
- Products like ChatGPT and Perplexity use a UI tactic of typing out responses as the text is generated. For some queries, this gives the perception that the answer is generated faster. Due to limitations in both Slack and Microsoft Teams, Moveworks is unable to mimic this behavior.
For Channel Resolver, there can be noticeable latency between a user asking a question in the channel and the Moveworks Assistant reaching out via DM. The vast majority of in-channel requests are fulfilled in under 30 seconds. Users who post in a channel are not expecting an instantaneous reach-out, since the Assistant sometimes does not respond at all when it cannot help. Furthermore, the Assistant will still reach out faster, on average, than an agent monitoring the channel.
Does adding more users create higher latency in Moveworks Assistant?
No, adding more users does not increase the latency of the Moveworks Assistant’s responses to users. More users could theoretically increase timeouts, but Moveworks protects against this by ensuring enough GPU capacity and infrastructure to handle large volumes of users. Moveworks has multiple customers with over 100,000 users on the Assistant platform, including one customer with over 500,000 users.
How does the Moveworks Assistant use dates?
Sometimes the Moveworks Assistant is not aware of a future date, or references a date that has already passed as the “next” occurrence of something like a holiday (e.g., the Assistant responding that the next holiday is July 4th when today is August 4th).
While the Assistant is aware of today’s date, it does not always use it reliably when answering questions about the “next” holiday.
There may be other reasons that further cause the Moveworks Assistant to struggle with dates:
- Multiple knowledge sources with conflicting dates: Multiple docs, knowledge articles, and FAQs can reference the same dates or holidays, and some of those docs may be out of date (for example, both a 2023 and a 2024 company holidays article are ingested) or contain conflicting information. This can confuse the Moveworks Assistant, which may reference multiple sources to create a single summary; if those sources conflict, the summary may be wrong.
- Dates formatted in tables: This often happens with KBAs that present a collection of dates (like company holidays) in a table. Because of the way HTML table and cell contents render when ingested, the Moveworks Assistant may struggle to associate the table’s columns and rows properly. It is best not to list collections of dates in tables, but to write out dates so the holiday and day are clearly associated with each other (e.g., “Independence Day, July 4, 2024”).
- Inconsistent abbreviation: This sometimes happens when articles abbreviate months, such as “Sep.” or “Sept.” for September. These abbreviations can make it difficult for the Moveworks Assistant to resolve the exact dates. It is best to use the full name of the month in articles.
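Both failure modes can be reproduced with plain Python. This is a toy demonstration of the content-formatting issues themselves, not of Moveworks ingestion; the holiday table and markup are made up:

```python
import re
from datetime import datetime

# 1) Tables: naive tag-stripping (a stand-in for how table markup can be
#    linearized during ingestion) collapses the row/column grid into one
#    run of text, forcing a model to re-infer which date goes with which holiday.
table_html = ("<table><tr><th>Holiday</th><th>Date</th></tr>"
              "<tr><td>Independence Day</td><td>July 4, 2024</td></tr>"
              "<tr><td>Labor Day</td><td>September 2, 2024</td></tr></table>")
flattened = " ".join(re.sub(r"<[^>]+>", " ", table_html).split())
# flattened == "Holiday Date Independence Day July 4, 2024 Labor Day September 2, 2024"

# 2) Abbreviations: full month names parse unambiguously, while informal
#    abbreviations like "Sept." match neither %B (full name) nor %b ("Sep").
full_name = datetime.strptime("September 2, 2024", "%B %d, %Y")  # parses fine
try:
    datetime.strptime("Sept. 2, 2024", "%b %d, %Y")
    abbrev_parsed = True
except ValueError:
    abbrev_parsed = False  # "Sept." is rejected
```

The same ambiguity that trips up `strptime` here makes abbreviated months harder for an assistant to resolve, which is why spelling out month names (and avoiding date tables) tends to give more reliable answers.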
Note that other assistants experience this same issue; it is not unique to the Moveworks Assistant.