“But tante, then we will never have Open Source AI”. Exactly. That’s how reality works. If you can’t fulfil the criteria of a category, you are not in that category. The fix is not to change the criteria. That’s playing pigeon chess.
This is a bad take. If your criteria aren’t grounded in reality, they aren’t useful, so of course you should change the criteria.
It’s also a missed opportunity to point to an AI model that did things right and that would qualify as “open source AI” even if that definition were not watered down. For example, OLMo (which I just learned about) says that they provide full insight into the training data as well as “full model weights, training code, training logs, training metrics in the form of Weights & Biases logs, and inference code.” Their largest models are 7B models, which is large enough to be relevant.
Saying “Meta and Alphabet will never release Open Source AI that meets the proposed definition” is fine. Saying “we’ll never have Open Source AI, period, that meets the proposed definition” means your proposed definition needs to be rewritten.