Coming Soon – Protégé Ask: Survey of Laws and Regulations


April 27, 2026

Lexis+ has announced a new Protégé Ask feature, Survey of Laws and Regulations, which, as of March 2026, was available only to faculty. When a user enters a prompt that includes phrases such as “50-state survey,” “Which states,” “What states,” or “State survey,” Protégé suggests any necessary refinements, then returns an interactive table with expandable AI-generated summaries and linked citations.

To see how this new feature performs, let’s compare it with Westlaw’s AI Jurisdictional Surveys. We’ll start in Protégé Ask with the following prompt: Create a 50-state survey concerning IIED.

Ask Screen - Lexis+

Not surprisingly, since this prompt was fairly broad, Protégé responded by asking for a more specific survey topic:

I selected the first option, “Please draft a 50-state survey summarizing the law on intentional infliction of emotional distress (IIED) in all U.S. jurisdictions,” which Protégé tagged as my original prompt. After making the selection, I was warned that:

For the most part, the survey reported “no statutes or regulations identified,” with a few exceptions. For example, Louisiana provided this result:

Let’s compare these results with Thomson Reuters’ AI Jurisdictional Surveys using the same prompt: Create a 50-state survey concerning IIED.

We get a similar warning about how long the results may take, but for this prompt, AI Jurisdictional Surveys returned its survey much faster than Protégé Ask.

If we compare the results, AI Jurisdictional Surveys returned results for more states and territories than Protégé, but the two platforms also produced different results for several jurisdictions.

For example, California:

AI Jurisdictional Surveys
Protégé Ask

AI Jurisdictional Surveys noted that the “statutes do not provide a direct definition of IIED” and then provided additional potentially relevant information, while Protégé simply reported “no statutes or regulations identified.”

For Minnesota and Mississippi:

AI Jurisdictional Surveys
Protégé Ask

AI Jurisdictional Surveys did not have an entry for Mississippi, and for Minnesota it “assum[ed] the query seeks guidance on […] conduct relevant to IIED.” Protégé provided entries for both states.

How does that compare to Thomson Reuters’ (non-GenAI) Jurisdictional Surveys? Unlike the AI tools, Jurisdictional Surveys rely on Westlaw’s index tags to account for differences in terminology across jurisdictions. That process produces a curated list of potential synonyms designed to help ensure more comprehensive results. In this case, that means the survey should capture statutes that use the term ‘intentional infliction of emotional distress’ as well as statutes with terms using closely related language. Here are our results:

Jurisdictional Surveys

Unlike AI Jurisdictional Surveys and Protégé Ask, Jurisdictional Surveys returned results for all fifty states, likely because it relied on a broader set of potential synonyms. So which method would work best for a researcher? That depends on what the researcher is trying to find, but here are some key takeaways from this brief foray into using GenAI for multi-state research:

  1. Prompt wording matters. When the platforms generated synonyms or alternate terms, those terms varied not only between Westlaw and Lexis, but also across the individual entries returned. A more specific prompt might have narrowed the results to statutes or regulations directly addressing intentional infliction of emotional distress, but the results suggest that the platforms did not interpret the legal phrase in exactly the same way from entry to entry.
  2. GenAI state survey tools are best used as research aids, not definitive answers. They can be helpful for identifying or confirming a potentially relevant statute. If the researcher’s goal was to capture every statute that could potentially apply, Jurisdictional Surveys returned the largest number of results. But more results do not necessarily mean that those results were more accurate, relevant, or a complete list of all potential state statutes. Each result would still need to be carefully reviewed.
  3. Platform design influences research outcomes. Because Westlaw and Lexis generated different synonyms and interpreted the same prompt differently, the results were influenced both by the researcher’s prompt and by the platform’s interpretation of that prompt. That means the same research question can produce meaningfully different results across tools, potentially causing relevant statutes to be missed depending on which platform you use.

AI jurisdictional survey tools can be a helpful starting point for a research project, but they should not be treated as definitive no matter which platform you use. The best approach would be to use clear prompts, compare results when possible, and verify any potentially relevant statutes through traditional research methods. (Secondary sources will always be your friends!) These tools can help you save time, but they do not fully replace your own legal research skills or judgment.