All AI models might be the same
What can language model embeddings tell us about whale speech and decoding ancient texts? (on The Platonic Representation Hypothesis and the idea of *universality* in AI models)
In fact, our brains’ models of the world are so similar that we can narrow down almost any concept by successively refining the questions we ask, à la the parlor game Mussolini or Bread.

Something similar happens when we train models. When the dataset gets too big and the model can no longer fit all of the data in its parameters, it’s forced to “combine” information from multiple datapoints to get the best training loss, and that compression is exactly what produces shared structure. Apparently, knowing that an image is 0.0001% parakeet and 0.0017% baboon is useful enough to infer not only the true class but also lots of seemingly irrelevant information, like facial structure, pose, and background details. Both ideas are sketched in code below.
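To make the question-refinement idea concrete, here is a minimal sketch (the word list and the alphabetical yes/no question are hypothetical, not from the game or the post): each answer roughly halves the candidate set, so about log2(N) questions suffice to pin down one of N concepts.

```python
import math

# Toy concept space: k yes/no questions can, in principle, distinguish
# up to 2**k concepts, since each answer halves the candidate set.
concepts = sorted(["amphora", "baboon", "bread", "mussolini", "parakeet", "whale"])

def narrow_down(candidates, is_target_leq):
    """Binary-search the concept space with questions of the form
    'does the concept come at or before <word> alphabetically?'.
    `is_target_leq` stands in for the other player's yes/no answers."""
    questions = 0
    while len(candidates) > 1:
        mid = candidates[len(candidates) // 2 - 1]
        questions += 1
        if is_target_leq(mid):   # answer "yes": keep the first half
            candidates = candidates[: len(candidates) // 2]
        else:                    # answer "no": keep the second half
            candidates = candidates[len(candidates) // 2 :]
    return candidates[0], questions

target = "parakeet"
guess, n = narrow_down(concepts, lambda word: target <= word)
print(guess, n, math.ceil(math.log2(len(concepts))))  # parakeet 3 3
```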
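The parakeet/baboon point is the “dark knowledge” observation behind knowledge distillation (Hinton et al., 2015): near-zero class probabilities still encode which classes resemble each other. A minimal sketch, with made-up classes and logits, that exposes this structure by softening a classifier’s logits with a temperature:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for an image of a dog (classes made up for illustration).
classes = ["dog", "wolf", "parakeet", "baboon", "truck"]
logits  = [9.0, 6.5, 1.0, 0.5, -4.0]

hard = softmax(logits)                   # T=1: nearly all mass on "dog"
soft = softmax(logits, temperature=4.0)  # T=4: similarity structure visible

for c, h, s in zip(classes, hard, soft):
    print(f"{c:>8}: T=1 {h:.6f}   T=4 {s:.4f}")

# Even at T=1 the model assigns far more mass to "wolf" than to "truck":
# those tiny probabilities encode which classes look alike, which is extra
# signal beyond the one-hot label for any model trained to match them.
```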