Sam and OpenAI: Memory, Context, “Grownup Mode”
Talk about a quality of life upgrade. I can’t wait. 😉
Source date (UTC): 2025-01-01 01:42:38 UTC
Original post: https://twitter.com/i/web/status/1874269973054709760
RT @danheld: White males are actively discriminated against in tech.
It’s an open secret of Silicon Valley.
– Had a brilliant/upper 1%…
Source date (UTC): 2024-12-31 16:45:19 UTC
Original post: https://twitter.com/i/web/status/1874134753223594116
Yes, that site and the videos by 3Blue1Brown are enough to explain LLMs to the interested.
Source date (UTC): 2024-12-30 16:37:56 UTC
Original post: https://twitter.com/i/web/status/1873770505540616697
Reply addressees: @bryanbrey @drummatick
Replying to: https://twitter.com/i/web/status/1873744737183969607
Ahhh, no. Grok humors you. All extant major AIs provide positive feedback to encourage you to continue exploring your ideas.
And note how many references to either “assuming” or appeals to analogy it must use to keep ‘giving you the benefit of the doubt’ and incentivizing you to continue. I’ve heard dozens of variations on this theory, and in the end it’s not much different from Chris Langan’s.
Reply addressees: @njalbertini
Source date (UTC): 2024-12-30 01:00:27 UTC
Original post: https://twitter.com/i/web/status/1873534582647341056
Replying to: https://twitter.com/i/web/status/1873533354857169030
Dumb. Because the spec isn’t sufficient to distinguish the vast array of configurations and capabilities. And besides, the Apple version is superior.
Source date (UTC): 2024-12-28 18:31:39 UTC
Original post: https://twitter.com/i/web/status/1873074348552274181
Reply addressees: @nexta_tv
Replying to: https://twitter.com/i/web/status/1872967180872237197
1. I only use the AI when I already know the answer but want to save time writing. In other words as a search engine. Often it will add a point or two that I wouldn’t have to a list. I find this helpful.
2. I almost always edit the answer because the AIs literally don’t know the correct answer – they have not been trained properly because of the overwhelming bias in the written record.
3. I almost always tie it into my work or I wouldn’t bother.
So no you can’t get that kind of answer out of the AI.
In the past I would just have done a Google search or two or three and then written the list out myself. In this sense, AI helps me write more thorough and ‘educating’ posts on more topics with less effort.
If you watch Brad and me work, I usually just draft a paragraph that explains what I’m looking for, using all the right ‘anchor points’, and because it knows my work so well at this point, it gets most of the way there. Brad and I still have to correct it often, and I can’t train it out of weasel words, but it’s so much faster than going through my notes (of which there are thousands and thousands of pages), or my slide decks, or my scripts, then a bunch of Google and Bing queries, and then writing it all manually. So instead we upload the book to date, then chapter by chapter, and then we tell it to remember certain things. And if I compose a good prompt, I get about what I want.
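For readers who want the mechanics, here is a minimal sketch of that kind of document-grounded drafting loop: load prior chapters as context, then send the ‘anchor point’ paragraph as the drafting prompt. It assumes the OpenAI Python SDK; the file layout, model name, and prompt wording are illustrative placeholders, not the actual setup described in the post.

```python
# Minimal sketch (illustrative, not the actual workflow): ground an LLM draft
# in previously written chapters, then prompt it with an "anchor point"
# paragraph describing what the draft should cover.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Load the book-to-date chapter by chapter so the model works from the source text.
chapters = [p.read_text() for p in sorted(Path("chapters").glob("*.txt"))]
context = "\n\n".join(chapters)

# Hypothetical anchor-point paragraph; in practice this is drafted by hand.
anchor_paragraph = (
    "Draft a list of the main failure modes of IQ-style tests for AI, "
    "using my usual terminology (causal density, testifiability)."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Use only the supplied chapters as source material."},
        {"role": "user", "content": context},
        {"role": "user", "content": anchor_paragraph},
    ],
)

# The output is a first draft; as noted above, it still needs manual editing.
print(response.choices[0].message.content)
```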
Reply addressees: @orion_pulse
Source date (UTC): 2024-12-25 21:08:15 UTC
Original post: https://twitter.com/i/web/status/1872026594577158144
Replying to: https://twitter.com/i/web/status/1872024220328497311
THE FALSE CONFIDENCE OF AI TESTS USING MATRIX PATTERNS (Raven’s), MATHEMATICS, AND PROGRAMMING.
These are low causal density problems – even if they are high permutation and probability problems. So we are building a false confidence in our AI progress under these tests. By… https://twitter.com/curtdoolittle/status/1870314381957017981
Source date (UTC): 2024-12-25 18:36:51 UTC
Original post: https://twitter.com/i/web/status/1871988495121805402
RT @DreMiclea: @curtdoolittle @chamath I thought your point (d) re: “safety = lying” for LLMs was profound.
Source date (UTC): 2024-12-25 17:06:48 UTC
Original post: https://twitter.com/i/web/status/1871965833767133330
We are working on it, but (a) there remains a dispute about what constitutes truth. (Our org uses ‘testifiability’, which is the only truth test possible.) (b) LLMs currently do not recursively test their outputs like humans do, but it is in the development queue; it’s just expensive. (c) all LLMs should (must) converge on the truth unless taught to lie. (d) safety is the equivalent of lying, and they are being taught to lie. (e) we are already seeing LLMs knowingly lie because of safety. (f) it is possible to separate truth (testifiability) from hypothesis (the best that can currently be done) and from hallucination (error); our organization has solved that problem, and we will be pursuing financing for that work this spring. (g) it was a ‘hard problem’.
Reply addressees: @chamath
Source date (UTC): 2024-12-24 19:33:58 UTC
Original post: https://twitter.com/i/web/status/1871640481618354176
Replying to: https://twitter.com/i/web/status/1871595818031136926
HAIL, can you send that text via PM so I can run some tests against the AI?
Source date (UTC): 2024-12-23 18:12:43 UTC
Original post: https://twitter.com/i/web/status/1871257645430251950
Reply addressees: @Hail__To_You
Replying to: https://twitter.com/i/web/status/1871172269093097841