Ethically trained AI startup Pleias releases new small reasoning models optimized for RAG with built-in citations
The company has announced the release of two open-source, small-scale reasoning models designed specifically for retrieval-augmented generation (RAG), citation synthesis, and structured multilingual output. Pleias describes the Pleias-RAG models as “proto-agentic”: they can autonomously assess whether a query is understandable, determine whether it is trivial or complex, and decide whether to answer, reformulate, or refuse based on the adequacy of the retrieved sources. Pleias emphasizes the models’ suitability for integration into search-augmented assistants, educational tools, and user-support systems. Looking ahead, the company plans to expand the models’ capabilities through longer context handling, tighter search integration, and personality tuning for more consistent identity presentation.
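To make the “proto-agentic” behavior concrete, the decision flow described above can be sketched as a simple control loop. This is purely illustrative: the `Source` type, the `proto_agentic_answer` function, and its keyword-overlap heuristic are invented for this sketch and are not Pleias's API; in the actual Pleias-RAG models these judgments happen inside the model itself.

```python
from dataclasses import dataclass

@dataclass
class Source:
    ref_id: str  # citation marker, e.g. "1"
    text: str    # retrieved passage

def proto_agentic_answer(query: str, sources: list[Source]) -> dict:
    """Toy mock-up of the decision flow the article describes:
    assess the query, judge source adequacy, then answer,
    reformulate, or refuse -- with built-in citations on success."""
    # 1. Is the query understandable at all? (toy check: at least two words)
    if len(query.split()) < 2:
        return {"action": "reformulate",
                "message": "Query too short or ambiguous; ask the user to rephrase."}
    # 2. Are the retrieved sources adequate? (toy heuristic: keyword overlap)
    keywords = {w.lower() for w in query.split() if len(w) > 3}
    relevant = [s for s in sources
                if keywords & {w.lower() for w in s.text.split()}]
    if not relevant:
        return {"action": "refuse",
                "message": "No supporting sources; declining to answer."}
    # 3. Answer, appending [n]-style citation markers for each source used.
    citations = "".join(f"[{s.ref_id}]" for s in relevant)
    return {"action": "answer",
            "message": f"Grounded answer based on sources {citations}."}
```

The point of the sketch is the three-way branch: unlike a plain RAG pipeline that always generates an answer, a proto-agentic model can decline or push back when the query or the retrieved evidence is inadequate.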
Read the full story on VentureBeat.