Trust Assessment
Updated: May 8, 2026
Trust Assessment is an AI-assisted review of a participant conversation. It gives each supported conversation an overall trust score, dimension-level reasoning, confidence, and recommendations so you can decide whether a response needs extra review.
Trust Assessment is still in beta. Treat it as decision support, not an automatic accept-or-reject system.
Where It Appears
Trust Assessment appears on conversation detail pages for ended conversations. If a trust result exists, the collapsed card shows the overall score. Expanding it opens three tabs:
- Summary - a short assessment that starts with "Trustworthy" or "Suspicious."
- Trust Dimensions - scored dimensions with reasoning and confidence.
- Recommendations - practical follow-up checks when the model sees uncertainty or risk.
Trust scores are also available to analysis sessions and automations. You can ask analysis questions that segment by trust score, and you can use trust score ranges as automation conditions.
When It Runs
Perspective's post-conversation workflow runs Trust Assessment for completed and partial conversations; it skips abandoned conversations.
If a completed conversation does not have a trust result yet, the conversation page can request one. Existing trust results are reused unless you choose Re-run Assessment.
What It Reviews
The trust prompt reviews the research outline and the participant conversation together. Depending on what was captured, the assessment can consider:
- Participant identity and provided contact information
- Claimed role, experience, or eligibility
- Response depth, consistency, and relevance
- Captured technical context when available
- Participant metadata from URL Parameters, invite context, embeds, or connected channels
- The research objective and any qualification criteria in the outline
- Voice transcript context, while accounting for possible transcription errors
Dimension names vary by conversation. The UI can show dimensions such as identity, technical consistency, response quality, professional credibility, and eligibility fit when the assessment returns them.
Scores and Filters
Each assessment includes:
- Overall score from 0 to 100
- Dimension scores from 0 to 100
- Confidence of low, medium, or high
- Reasoning for each dimension
- Recommendations for manual checks
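The fields above can be modeled as a small record type. This is an illustrative sketch only; the field and dimension names are assumptions, not Perspective's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TrustDimension:
    name: str        # e.g. "identity" or "response quality"
    score: int       # 0-100
    confidence: str  # "low", "medium", or "high"
    reasoning: str   # the model's explanation for the score

@dataclass
class TrustAssessment:
    overall_score: int  # 0-100
    dimensions: list[TrustDimension] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)

# Hypothetical example result
result = TrustAssessment(
    overall_score=82,
    dimensions=[TrustDimension("identity", 90, "high",
                               "Contact details match invite metadata.")],
    recommendations=["Verify the claimed role against the qualification form."],
)
```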
In the Conversations tab, the status filter uses the trust score to split completed normal conversations into two groups:
- Valid includes completed normal conversations with trust score 70 or higher, plus completed normal conversations without a trust score yet.
- Suspicious includes completed normal conversations with trust score below 70.
Analysis search also supports trust tiers:
- High: score 80 or higher
- Medium: score 50 to 79
- Low: score below 50
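The tier boundaries above map to a straightforward range check; this sketch only restates the documented thresholds.

```python
def trust_tier(score: int) -> str:
    """Map a 0-100 trust score to an analysis search tier."""
    if score >= 80:
        return "High"
    if score >= 50:
        return "Medium"
    return "Low"
```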
These thresholds are filters, not final research rules. Adjust your review process to the stakes of the decision.
How to Use It
Use Trust Assessment to:
- Prioritize manual review for suspicious or low-confidence conversations
- Segment analysis by trust score, trust tier, or participant type
- Send low-trust responses to a team review channel with Automations
- Compare trust score against structured fields such as qualification, company size, or use case
- Decide which responses should be included in high-stakes reporting
Best Practices
- Read the summary and dimensions before excluding a response.
- Treat low confidence differently from low trust. Low confidence often means the model did not have enough evidence.
- Check cited conversation content, participant metadata, and form data before making final decisions.
- Use trust score filters to focus review, then document your inclusion or exclusion criteria outside the tool when your research requires an audit trail.
- Re-run assessment when transcript edits or other record changes affect the evidence.
Related Docs
- Conversation Results - where trust cards appear in the results workflow.
- Analysis Sessions - ask questions by trust score or trust tier.
- Automations - route conversations based on trust score ranges.
- Track Participant Context - capture metadata that makes trust review more useful.