IBM Open-Sources Its Granite AI Models
An anonymous reader quotes a report from ZDNet: IBM managed the open-sourcing of its Granite code models by using pretraining data from publicly available datasets, such as GitHub Code Clean, StarCoder data, public code repositories, and GitHub issues. In short, IBM has gone to great lengths to avoid copyright...
But, as IBM Research chief scientist Ruchir Puri said, "We are transforming the generative AI landscape for software by releasing the highest-performing, cost-efficient code LLMs, empowering the open community to innovate without restrictions." The Granite models, as IBM ecosystem general manager Kate Woolley said last year, are not "about trying to be everything to everybody." These decoder-only models, trained on code from 116 programming languages, range from 3 billion to 34 billion parameters.