Source date (UTC): 2025-01-31 01:54:58 UTC
Original post: https://twitter.com/i/web/status/1885144711029285046
BANKMAN-FRIED PARDON? (TRUMP MUST EYE-ROLL AT THAT ONE)
First: Implausible Deniability: Sam Bankman-Fried obtained the General Securities Representative Examination (Series 7) license on July 15, 2014. He also passed the Limited Representative-Equity Trader Exam (Series 55) on… https://twitter.com/WatcherGuru/status/1885046617662718327
Source date (UTC): 2025-01-30 23:31:33 UTC
Original post: https://twitter.com/i/web/status/1885108620293267819
WE WILL HAVE TO PURGE THERAPY AS WELL
https://www.youtube.com/watch?v=ANMmuQEX_Qk
Source date (UTC): 2025-01-30 18:59:51 UTC
Original post: https://twitter.com/i/web/status/1885040245504381413
I asked Grok…
https://x.com/i/grok/share/HOceFZ9qR0hs7hsudCfRwK8S4
Source date (UTC): 2025-01-30 14:41:08 UTC
Original post: https://twitter.com/i/web/status/1884975136966942966
The Bad Emperor Problem plagues Chinese history, and it repeats today. IMO, “He who breeds wins,” and both East and West are presently losing.
Source date (UTC): 2025-01-29 18:35:33 UTC
Original post: https://twitter.com/i/web/status/1884671740586189249
Reply addressees: @drmiller1960
Replying to: https://twitter.com/i/web/status/1884670770464313706
(Fascinating – NLI)
Trump is enacting almost every single policy recommendation we put forward other than financial sector reform, marriage reform, and education reform. I mean, we’re going to run out of things to complain about at this rate…
Source date (UTC): 2025-01-29 01:26:45 UTC
Original post: https://twitter.com/i/web/status/1884412834249122009
Uncertainty, shock, and failure to coalesce on a new strategy. The question is whether they’ve lost the moral high ground sufficiently that they need to take an entirely different strategy.
I think so.
Source date (UTC): 2025-01-28 18:30:35 UTC
Original post: https://twitter.com/i/web/status/1884308105644900519
Reply addressees: @Hail__To_You @BlutBodenBased
Replying to: https://twitter.com/i/web/status/1884307091667779734
I should have mentioned the fourth innovation, but (foolishly) didn’t consider it important at the time I posted:
iv) Limiting the weights to eight bits reduces the memory footprint without (apparently) affecting the outputs.
So Experts, Update Range, Tokens, and Bits, combined with the Reduction (Synthesis) of OpenAI’s data, and in most cases further reduction to other frameworks, compress the work effort of inference.
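A minimal sketch of the idea behind (iv), assuming plain symmetric per-tensor int8 quantization; the scheme, names, and sizes here are illustrative assumptions, not DeepSeek’s actual implementation (which reportedly uses an 8-bit floating-point format):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8.
    Memory per weight drops 4x (32 bits -> 8 bits); a single scale
    factor is kept to map the integers back to floats."""
    scale = np.abs(w).max() / 127.0                              # largest magnitude -> 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Round trip: for typical weight distributions the error is tiny,
# which is why the outputs are (apparently) unaffected.
w = np.random.default_rng(0).normal(0, 0.02, 4096).astype(np.float32)
q, s = quantize_int8(w)
print(f"max abs error: {np.abs(dequantize(q, s) - w).max():.6f}")
print(f"bytes: {w.nbytes} -> {q.nbytes}")
```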
Source date (UTC): 2025-01-28 18:29:26 UTC
Original post: https://twitter.com/i/web/status/1884307815801761793
Replying to: https://twitter.com/i/web/status/1884059118534811825
IN REPLY TO:
(Doolittle on AI)
RE: DeepSeek Nonsense
OK, I’ve been through the code that’s available. Not only is it obvious that the training code isn’t shared, but from what I’ve gathered, they are afraid or ashamed to share it, for good reason.
1) There are three innovations in the code that DeepSeek used to save compute:
i) A mixture of experts divides the problem vertically into silos (a sketch follows below).
ii) Limiting the network hierarchy that’s updated (reinforced) divides the problem scope horizontally.
iii) Predicting phrases instead of tokens reduces the network numerically (and, I suspect, produces more semantic value per byte, so to speak).
2) They used existing code from Meta’s open-source LLM and slightly modified it.
3) I am not positive, but given that the code thinks it’s OpenAI, the absence of the training code, and the similarity of the results, it appears that they either got a copy of the OpenAI weights, OR they traversed the OpenAI graph using multiple accounts instead of ‘training’ DeepSeek from source data.
4) In other words: yes, there are innovations, but they are micro-innovations on existing work and another example of intellectual property theft.
Ergo: DeepSeek is little more than a raid on OpenAI’s intellectual property under the pretense of replicating the work effort, which in the end does convert OpenAI’s private intellectual property into open source that others can use, IF we can recreate the training code so that we can tweak the model and add new verticals (Experts) to it.
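A minimal sketch of innovation (i), assuming a standard top-k gated mixture-of-experts layer as described in the public literature; the class name, layer sizes, and top_k value are illustrative assumptions, not DeepSeek’s actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Top-k gated mixture of experts: a router sends each token to
    only top_k of n_experts feed-forward 'silos', so the remaining
    experts' weights sit idle for that token -- the compute saving."""
    def __init__(self, d_model=16, d_ff=32, n_experts=8, top_k=2):
        self.top_k = top_k
        self.router = rng.normal(0, 0.02, (d_model, n_experts))    # gating weights
        self.w1 = rng.normal(0, 0.02, (n_experts, d_model, d_ff))  # expert up-projections
        self.w2 = rng.normal(0, 0.02, (n_experts, d_ff, d_model))  # expert down-projections

    def __call__(self, x):                         # x: (n_tokens, d_model)
        gates = softmax(x @ self.router)           # (n_tokens, n_experts)
        out = np.zeros_like(x)
        for t in range(len(x)):
            top = np.argsort(gates[t])[-self.top_k:]        # only top_k experts run
            for e in top:
                h = np.maximum(x[t] @ self.w1[e], 0.0)      # expert FFN (ReLU)
                out[t] += (gates[t, e] / gates[t, top].sum()) * (h @ self.w2[e])
        return out

layer = MoELayer()
tokens = rng.normal(size=(4, 16))
print(layer(tokens).shape)   # (4, 16): same output shape at ~top_k/n_experts of the FFN cost
```

Innovations (ii) and (iii) save in the same spirit: (ii) touches fewer layers per update pass, and (iii) amortizes each forward pass over more than one token of output.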
I don’t have time for this kind of skullduggery, but our company’s future depends upon access to an AI we can train by adding an expert to it, a chain of thought for that expert, and an API consisting of a set of prompts to feed that chain of thought.
Cheers
CD
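A minimal sketch of the “expert + chain of thought + prompt API” stack described in the last paragraph, under the assumption that each expert is just an ordered set of prompts fed stepwise to an underlying model; every name and structure here is hypothetical, not an existing system:

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    chain_of_thought: list[str]      # ordered reasoning prompts for this vertical

@dataclass
class ExpertAPI:
    expert: Expert
    def run(self, question: str, ask) -> str:
        """Feed the chain of thought step by step; `ask` is any callable
        that sends one prompt to the underlying model and returns text."""
        context = question
        for step in self.expert.chain_of_thought:
            context = ask(f"{step}\n\nContext so far:\n{context}")
        return context

# Usage with a stub in place of a real model call:
law = Expert("contract-law", ["List the parties and obligations.",
                              "Identify ambiguities.",
                              "Summarize the risk."])
api = ExpertAPI(law)
print(api.run("Review this NDA: ...", ask=lambda p: f"[model reply to: {p[:40]}...]"))
```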
Original post: https://x.com/i/web/status/1884059118534811825
(Doolittle on AI)
RE: Turns out it’s not complicated. So there is no moat.
I should probably write something about why there is no moat for AI: it takes advantage of the intrinsic data structure of sentences (language), and once the LLM strategy of brute-forcing an n-dimensional manifold representation was invented, the simplicity of the architecture for AI was exposed, and its logic became just a commodity.
Once you solve the interface problem – which LLMs do very well by using ordinary language as a programming language – the entire stack of human cognitive faculties is simply a matter of recursive filtering with recursive wayfinding.
I’ve been writing about the simplicity of consciousness for years now, and it’s going to emerge as rather obvious, even to those nonsense-philosophers who lack knowledge of neuroscience and cognitive science and so remain trapped in the fantastic, just as their supernatural and theological peers remain.
(Repost)
Curt
Doolittle
Source date (UTC): 2025-01-28 03:05:05 UTC
Original post: https://twitter.com/i/web/status/1884075194861645824