AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows
Mixus's "colleague-in-the-loop" model blends automation with human judgment for safe deployment of AI agents.
For example, a large retailer might receive weekly reports from thousands of stores containing critical operational data (e.g., sales volumes, labor hours, productivity ratios, compensation requests to headquarters).

In another case, to build a fact-checking agent for reporters, co-founder Shai Magzimof simply described the multi-step process in natural language and instructed the platform to embed human verification steps at specific thresholds, such as when a claim is high-risk and could result in reputational damage or legal consequences. Combined with integrations for other enterprise software like Jira and Salesforce, this lets agents perform complex, cross-platform tasks, such as checking open engineering tickets and reporting their status back to a manager on Slack.
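Mixus configures these workflows in natural language, so the sketch below is not its API. It only illustrates the underlying "colleague-in-the-loop" pattern the article describes: agent output below a risk threshold proceeds automatically, while anything above it is routed to a human overseer. All names here (`Claim`, `process_claim`, `risk_threshold`, the placeholder review functions) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A single agent output awaiting verification (illustrative type)."""
    text: str
    risk: float  # 0.0 (trivial) .. 1.0 (reputational/legal exposure)

def auto_verify(claim: Claim) -> str:
    # Placeholder for the agent's automated check.
    return f"auto-verified: {claim.text}"

def request_human_review(claim: Claim) -> str:
    # Placeholder: in a real system this would route the claim to a
    # human overseer, e.g. via a Slack message or task queue.
    return f"escalated to human overseer: {claim.text}"

def process_claim(claim: Claim, risk_threshold: float = 0.7) -> str:
    """Gate high-risk claims behind human verification."""
    if claim.risk >= risk_threshold:
        return request_human_review(claim)
    return auto_verify(claim)
```

The design choice is that the threshold, not the agent, decides when a human enters the loop, which is what keeps routine low-risk work fully automated.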
Or read this on VentureBeat