When ChatGPT summarises, it does nothing of the kind
One of the use cases I thought was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn’t summarising at all; it only looks like it…
The paper itself, for instance, argues that ‘regulatory strategies’ (rules and compliance) do not work all that well (e.g. they are costly and not agile), and that an organisational governance structure should therefore be strengthened. I have been sitting on a post that suggests we should look at LLM outputs as being influenced by three recognisable elements: the parameter volume, the training data, and the context. If any term or phrase here is unclear or confusing (e.g., I can understand that most people do not immediately know that ‘context’ in an LLM is whatever text has gone before, both human-written prompt and LLM-generated reply, when producing the next token), you can probably find a clear explanation there.
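The role of ‘context’ mentioned above (everything generated so far, prompt plus replies, feeding the prediction of the next token) can be sketched as a toy loop. This is a minimal illustration of the autoregressive mechanics only; `next_token` here is a hypothetical stand-in, not any real model or API:

```python
def next_token(context: list[str]) -> str:
    """Stand-in for a real LLM: maps the full context to one token.
    A genuine model would score every possible token given the whole
    context and sample one; here we just return canned tokens."""
    canned = ["It", "only", "looks", "like", "summarising", "<eos>"]
    return canned[len(context) % len(canned)]


def generate(prompt: list[str], max_new: int = 10) -> list[str]:
    """Autoregressive generation: each new token is conditioned on the
    entire context so far, and then becomes part of that context."""
    context = list(prompt)            # context starts as the human prompt
    for _ in range(max_new):
        tok = next_token(context)     # prediction depends on all prior text
        if tok == "<eos>":            # end-of-sequence token stops the loop
            break
        context.append(tok)           # the reply itself joins the context
    return context
```

The key point the sketch makes is that the model never operates on the source document ‘as a document’; it only ever conditions on the growing token sequence.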