We can, but apparently my (our) work is sufficiently expressed in the publicly available data that it's not necessary. I'm currently experimenting with the difference between training as we do it and tuning it specifically; that experiment will tell us which way to invest. Though if the expansion of memory continues as promised, it won't be necessary. Instead we'll move to the OpenAI o1 model, which is more oriented toward breaking down tasks, as required by the strict construction (constructive logic) we use.

Reply addressees: @bryanbrey


Source date (UTC): 2024-11-19 03:52:33 UTC

Original post: https://twitter.com/i/web/status/1858719990444437504

Replying to: https://twitter.com/i/web/status/1858680117431529810
