Are large language models the final paradigm before AGI? (Not sure about the best way to operationalize this question yet.)
Created by Morpheus on 2022-12-16; known on 2070-01-01
- Morpheus estimated 5% on 2022-12-16
- Morpheus said “This would resolve negative if we explicitly train the “final system” towards AGI on other modalities as well (like images and video?). If future systems look more surprising in comparison to autoregressors like GPT, this would also resolve negative.” on 2022-12-16
- PseudonymousUser estimated 90% on 2022-12-16
- PseudonymousUser said “If AGI is realized via transformer decoders trained with images and video as inputs but not outputs, I think this should resolve as true. ” on 2022-12-16
- PseudonymousUser said “For example, DeepMind’s generalist agent Gato is essentially an LLM (transformer decoder with discrete output space) that can handle these other input modalities.” on 2022-12-16
- PseudonymousUser said “If you disagree, I can create another prediction covering this!” on 2022-12-18
- sty.silver estimated 15% on 2022-12-23
- Baeboo estimated 10% on 2022-12-23
- Morpheus said “@Tapetum-Lucidum yeah, you would need to create a different question. This question is not about transformers. I created this one after disagreeing with a friend about whether language data is enough or if video etc. would actually be necessary.” on 2023-01-18
- Morpheus said “@Tapetum-Lucidum Though I agree my wording of this question was a bit weird, and agree with you that it’s essentially “the same paradigm”.” on 2023-01-18
- PseudonymousUser said “Ok, thanks” on 2023-01-18