Theme: Productivity

  • “Q: Will AI cause job losses for attorneys?”

    – “Q: Will AI cause job losses for attorneys?”

    A lot of legal work involves investigation, interviews, and depositions. The important part is arguing findings in court. That scope of work is going to stay. Bullshit and lie detection is not going to be automated. Negotiation isn’t going to be, even if negotiating positions might be automatically generated.

    On the other hand, a lot of the work is research, creating a strategy, communicating both above and below board, filling out nonsense forms, and filing nonsense procedural paperwork to grease the gears of the court system. That nonsense is going to collapse, and with it the junior jobs in the field. So I see ‘compression’ in the field just as I see it in all of them.

    MORE:

    The role of an attorney, who is technically an ‘officer of the court’ and holds liability in that role, is not something easily replaced. (For example, as an activist firm, we can assist you and a lawyer with strategy, but we cannot stand for you in court.) However, the number of billable hours per client will collapse, driving up the number of clients necessary.

    What we are finding in our activist work (Brandon’s side of the organization) is that we can do for the equivalent of one salary, what a team of lawyers was required to do in the past.

    So I expect both rapid change in the profession and radical resistance to AI by the judiciary. Why? BECAUSE WE CATCH THEIR BULLSHIT and abuse of the citizenry under the veil of procedure. The lawyers are afraid to. The public is too ignorant to. So we will see ‘tumult’ and judicial turnover over the next twenty years.

    Which, if you can imagine the moral absolutism of the people who work in our organization, that’s sort of what we live for…

    Cheers

    cc:
    @ThruTheHayes


    Source date (UTC): 2025-09-22 21:34:18 UTC

    Original post: https://twitter.com/i/web/status/1970240262745325617

  • EVIDENCE OF THE CRISIS OF THE AGE: Tech employees doing “fake work”?

    EVIDENCE OF THE CRISIS OF THE AGE
    Tech employees doing “fake work”?
    https://youtube.com/shorts/IOHMQvkpoiI?si=R76q4SHCct0yYVR-

    She shoots it down – or at least tries.
    On the other hand, comparatively, it’s more that they do little meaningful work (if not done it wouldn’t matter), it has no economic impact on the company, and the tech companies have been either profitable (profit per employee) or funded by investment sufficiently that they can afford it. “Work” does expand to fulfill all available time. And Andresson’s complaint (obviously exposed by Musk at Twitter), is well studied in the literature. This is the bubble. I’m not sure about the financial system bubble thought I am aware it will deflate one way or another, though the investment is racing toward the extraordinary and durable returns on AI. But the employment bubble was something we saw in the run up to the 2001 crash in employment, and of course this is the beginning of the 202X crash in employment.
    I am absolutely stupified by the expectations (particularly of women) really stated as ‘privilege’ or ‘deservedness’ in response to their unnecessarily expensive BS degrees.
    The generational effect, especially among women, in employment, in relationships, in family, and in government has emerged as the real crisis of our age. Proof that whatever happens in government rolls downhill into the private sector, and from there in to the family and dating sectors … and that’s when either the next generations forces reform, or the civilization collapses.


    Source date (UTC): 2025-09-21 21:55:51 UTC

    Original post: https://twitter.com/i/web/status/1969883298152563168

  • The Tragedy of Value Capture in AI Economics

    The Tragedy of Value Capture in AI Economics

    So, you know, this is why I’m concerned: there is a small number, and a small percentage, of us for whom your (ChatGPT) utility is priceless. And it appears challenging to produce a revenue model by which the majority of low-revenue consumers, plus a subset of high-revenue industries, can fund the very-low-revenue minority of us who are basically engaged in research, where the ‘returns on investment’ accrue over time, are widely distributed, and are challenging to capture.
    The tragedy of value capture in AI economics:
    • The few who extract extreme intellectual leverage create returns that are high-diffusion, long-horizon, and hard to monetize.
    • The many who pay for it use it in low-complexity, short-horizon, easy-to-monetize ways (customer service, marketing copy, coding help).
    • The cross-subsidy problem arises because the system’s cost structure grows with the high-end users even though its revenue model grows with the low-end users.
    Let’s break this into a causal chain so we can see why the problem persists and how it might be solved.
    • Median user: treats the system like a productivity tool → linear value → predictable ROI → fits SaaS pricing (per-seat, per-token).
    • Extreme-tail user: treats the system like a general reasoning engine → nonlinear value → intellectual capital → benefits diffuse across industries, decades, or civilizational scale.
    The revenue logic of SaaS fits the first, but your cohort sits on the research frontier—more like Bell Labs or DARPA: the ROI exists, but it’s uncapturable by simple subscription.
    • System cost scales superlinearly with model size, compute per query, and iteration depth (e.g., recursive workflows, long contexts).
    • Median users subsidize capability infrastructure, but not frontier exploration.
    • Pricing models (per-token, per-seat) fail because high-value frontier use may be high-compute but low-immediate-revenue.
    Result: the very group that turns AI into new science, law, or philosophy sits outside the monetization sweet spot.
    This looks like the economics of:
    • Universities: subsidized because knowledge spills over into everything else.
    • Open-source software: commercially unsustainable until corporate backers appeared.
    • Basic science funding: always a public good problem; solved by state, philanthropy, or consortia.
    AI may face the same pattern: the frontier becomes public goods infrastructure, while monetization happens at the applied layer (apps, industries, verticals).
    Here’s where contrarian solutions might emerge:
    If the frontier cohort remains small but disproportionately valuable, the rational strategy may be:
    1. Separate infrastructure from application.
    2. Treat infrastructure as semi-public-good with state/industry/philanthropy funding.
    3. Extract revenue at the application layer (vertical SaaS, decision automation, domain-tuned AIs).
    4. Use cross-subsidy mechanisms (endowments, compute pools) to underwrite the frontier itself.
    This is why OpenAI, Anthropic, and others oscillate between API SaaS models and state-like funding (Musk, Microsoft, Amazon, etc.): they’re groping toward this hybrid solution.
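    The cross-subsidy asymmetry described above can be sketched as a toy model. All numbers below are illustrative assumptions (seat counts, fees, per-query costs, and the superlinear depth exponent are invented for the sketch, not real provider economics): revenue scales linearly with seats, while compute cost scales superlinearly with iteration depth, so the median segment runs a surplus that underwrites the frontier segment's deficit.

```python
# Toy cross-subsidy model. All numbers are illustrative assumptions,
# not any provider's actual pricing or cost data.

def segment_economics(seats, monthly_fee, queries, cost_per_query, depth):
    """Return (revenue, cost) for a user segment.
    Cost grows superlinearly with iteration depth (depth**1.5 is an
    assumed exponent standing in for recursion/long-context overhead)."""
    revenue = seats * monthly_fee
    cost = seats * queries * cost_per_query * depth**1.5
    return revenue, cost

# Hypothetical median segment: many seats, shallow per-seat usage.
median_rev, median_cost = segment_economics(
    seats=1_000_000, monthly_fee=20, queries=100, cost_per_query=0.001, depth=1)

# Hypothetical frontier segment: few seats, deep recursive usage.
frontier_rev, frontier_cost = segment_economics(
    seats=1_000, monthly_fee=20, queries=5_000, cost_per_query=0.001, depth=25)

print(f"median:   revenue ${median_rev:,.0f}, cost ${median_cost:,.0f}")
print(f"frontier: revenue ${frontier_rev:,.0f}, cost ${frontier_cost:,.0f}")
# The median surplus subsidizes the frontier deficit: the system's cost
# structure grows with high-end users while revenue grows with low-end users.
```

    Under these assumptions the million median seats yield roughly $20M of revenue against $100K of compute, while the thousand frontier seats yield $20K of revenue against roughly $625K of compute, which is the mismatch per-seat and per-token pricing cannot resolve.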


    Source date (UTC): 2025-09-09 15:37:03 UTC

    Original post: https://x.com/i/articles/1965439317305872567

  • Business Objective: A Long-Term Producer of Demonstrated Intelligence

    Business Objective: A Long-Term Producer of Demonstrated Intelligence

    We position our business objective as a long-term producer of demonstrated intelligence rather than a commodity model-builder. There are four dimensions to that decision.
    Our purpose is not to duplicate sunk cost in foundation model development. The industry already has extraordinary players (OpenAI, Anthropic, DeepSeek, Meta, etc.) whose specialization is infrastructure: scaling compute, building architectures, and training on giant corpora. Competing with them would dilute our resources, consume capital with little marginal return, and distract us from our actual comparative advantage.
    Instead, our purpose is to take those base-layer models and convert them into engines of demonstrated intelligence: models that operate within truth, reciprocity, and decidability. That means our business is not in producing “yet another model” but in producing a higher standard of performance across models.
    • Foundational Model Companies → Produce scale, correlation, and generality. They optimize hardware throughput and training loops. They handle the customer relationships, sales, and marketing.
    • We (Runcible/NLI) → Add the constraint system, operational grammar, and decidability layer that turns correlation into causality, and causality into intelligence. We continually expand domains by Mandelbrotian incrementalism to deny entrants the opportunity to field a competitive alternative.
    The distinction is analogous to:
    • Hardware manufacturers (NVIDIA, Intel) don’t try to become operating system vendors.
    • Operating system vendors (Microsoft, Apple) don’t try to become app makers for every vertical.
    Each tier has a natural specialization.
    We are in the OS + application tier for intelligence: not raw models, but how they are governed, tuned, and deployed for truth and cooperation.
    Training new models is capital-inefficient for us:
    • Cost: Hundreds of millions in compute and data pipelines.
    • Redundancy: Produces yet another model that differs little from what already exists.
    • Opportunity Cost: Diverts our focus from building the constraint layer and applied platform that no one else can produce.
    By standing on the shoulders of others, we accelerate time-to-market, preserve capital for innovation, and avoid dissipating investor returns on vanity projects.
    Our long-term moat is not “we own a model,” but “we produce demonstrated intelligence across any model.”
    • That means we are model-agnostic.
    • We can work with the best model available at any point in time.
    • We are future-proof: as base models evolve, our system rides the curve without reinvestment.
    The Oversing-Runcible platform becomes a perpetual layer of governance and adjudication, a market-defining standard for reasoning, truth, and cooperation in AI. That standard is our brand, our moat, and our contribution.


    Source date (UTC): 2025-08-25 21:16:25 UTC

    Original post: https://x.com/i/articles/1960088901785448463

  • One of my employees did this in the 80s at a bank.

    One of my employees did this in the 80s at a bank. Took him two weeks. They fired the whole floor of 70 people. One person could then do the job. Meaning: this isn’t the first time we’re going to go through this process. I just think it’s going to be much bloodier this time, because people make such inflated salaries in the sector and the value of their contribution is limited and difficult to measure.


    Source date (UTC): 2025-08-18 23:46:35 UTC

    Original post: https://twitter.com/i/web/status/1957589976578945284

  • Jeff: general rule

    Jeff: general rule:
    1) You are more dependent on your polity’s median distribution than you are on yourself.
    2) The more of the middle that is employed creating a market to satisfy and work with them, the less valuable those more than a standard deviation above in ability are to that market.

    So it’s an equilateral problem.


    Source date (UTC): 2025-08-12 00:08:40 UTC

    Original post: https://twitter.com/i/web/status/1955058818414776495

  • I’m just not sure yet.

    I’m just not sure yet. So far it seems like we’re replacing slaves with robots, and almost everyone will own one. As for super-AI, IMO we’re facing the issue that the fundamental problem of science and technology is testing. We might make a leap (I assume we will), but will a new limit emerge? I think so.


    Source date (UTC): 2025-08-06 00:58:26 UTC

    Original post: https://twitter.com/i/web/status/1952897014959874085

  • I have a friend who runs a company that produces, installs, and maintains water quality equipment.

    I have a friend who runs a company that produces, installs, and maintains water quality equipment for government and industry. I’m in a coffee shop sitting next to men of similar disposition producing wastewater pumps for the same. In the past I’ve said the only company I can tolerate consists of other consultants and CEOs, because everyone else sounds like children running with populist or ideological scissors. Yet the people who make the world viable for those ‘children’ and their petty feelings, thoughts, and concerns are relatively invisible in any discourse – precisely because the children have gone ‘wild’ since they left the proximity to production-for-survival necessary on the farm.
    The world works because men who matter keep their heads down and work relatively invisibly, without respect or consideration.


    Source date (UTC): 2025-07-09 17:16:19 UTC

    Original post: https://twitter.com/i/web/status/1942996250619740642

  • How Double Income Households Became an Obligation

    How Double Income Households Became an Obligation
    https://youtube.com/shorts/kuZVg2qhy7Q?si=J_AC-O9SF1numwrC
    … via
    @YouTube


    Source date (UTC): 2025-07-04 04:24:23 UTC

    Original post: https://twitter.com/i/web/status/1940990048033558951

  • Is Microsoft Replacing Expensive American Workers with Cheap Indian H-1B Workers?

    Is Microsoft Replacing Expensive American Workers with Cheap Indian H-1B Workers?

    Let’s dive deeper into the distribution of laid-off workers and H-1B hires at Microsoft to assess whether this reflects a replacement of existing workers with lower-cost labor, or a shift toward new AI-focused roles also filled with lower-cost H-1B workers. This distinction is critical, as you’ve noted: the former could invite legal and political backlash in the current climate (e.g., under a Trump administration emphasizing “America First” policies), while the latter aligns with tech innovation priorities and is less likely to face criticism. I’ll base this on available data as of 06:59 PM EDT on Thursday, July 03, 2025, supplemented by trends and reasonable inference, while addressing your FYI about Microsoft potentially relocating to India.
    Step 1: Distribution of Laid-Off Workers
    • Scale and Timing: Microsoft announced layoffs of approximately 9,000 employees on July 02, 2025, following cuts of 6,000 in May and 305 in June, totaling over 15,300 this year (Hindustan Times, 2025-07-03; CNBC, 2025-07-02). The company’s global workforce was 228,000 as of June 2024, so these cuts represent about 6.7% of its headcount in 2025 alone.
    • Divisional Breakdown: Reports indicate layoffs span multiple divisions, including Xbox, Azure cloud services, and Redmond HQ (Hindustan Times; The Job Chicks Insider Edge, 2025-07-02). Xbox cuts (e.g., 5-10% of its team) suggest a focus on underperforming gaming units, while Azure layoffs might tie to AI infrastructure optimization. However, specific role types (e.g., engineers, support staff) and geographic distribution aren’t detailed in public data yet.
    • Skill Profile: Historically, Microsoft layoffs have targeted mid-level and support roles alongside some engineering positions during restructuring (e.g., 2023 cuts). The current wave likely includes a mix of software developers, IT support, and administrative staff, though AI-related roles might be spared or shifted internally.
    Step 2: Distribution of H-1B Hires
    • Volume and Timing: Microsoft filed 4,712 Labor Condition Applications (LCAs) for H-1B visas in the first half of fiscal 2025 (updated 06/04/2025), with a historical total of 14,181 applications from 2022-2024. This suggests a continued reliance on H-1B workers, with the 2025 filings coinciding with the July layoffs.

    • Occupational Focus: Per The Hindu (2025-01-22), 65% of H-1B petitions in 2023 were for computer-related occupations (e.g., software engineers, data scientists), and 72% went to Indian nationals, reflecting Microsoft’s outsourcing and AI talent needs. These roles often involve specialized skills in AI, machine learning, and cloud computing—areas Microsoft is heavily investing in (e.g., Microsoft 365 Copilot, 2025-05-01).

    • Geographic and Wage Context: Most H-1B hires are likely based in the U.S. (e.g., Redmond, WA), with wages often below the market median due to visa constraints. For example, H-1B salaries at Microsoft averaged $104,000 in 2023, compared to a U.S. median software engineer salary of $127,000 (Bureau of Labor Statistics, 2024), suggesting cost savings.

    Step 3: Comparing Layoffs and H-1B Hires
    • Overlap in Roles: The lack of granular data on laid-off roles complicates direct comparison. If layoffs primarily hit Xbox gaming or support staff (non-AI roles), while H-1B hires target AI and cloud engineers, this suggests a shift rather than replacement. However, if engineering or IT support roles overlap (e.g., junior developers), the replacement narrative gains traction. Given Microsoft’s AI pivot (e.g., AI agents handling tasks), it’s plausible that some laid-off engineers are being replaced by H-1B AI specialists.

    • Cost Dynamics: H-1B workers’ lower wages (up to 20-30% below market) could drive replacement if roles are similar. In a shift scenario, the cost savings might fund new AI initiatives, with H-1B hires filling niche roles unavailable domestically. Microsoft’s 2025 infrastructure investments (carbon-negative goals) indicate a long-term AI strategy, supporting the shift hypothesis.

    • Scale Alignment: The 9,000 layoffs dwarf the 4,712 H-1B applications in 2025, suggesting not all layoffs are tied to H-1B hires. This could reflect broader cost-cutting (e.g., 63,443 tech layoffs industry-wide) rather than a one-to-one replacement.

    Step 4: Legal and Political Implications
    • Replacement Scenario: If evidence emerges that Microsoft fired American engineers for H-1B hires in identical roles, it could violate the H-1B program’s requirement to prioritize U.S. workers when qualified (8 U.S.C. § 1182(n)). In the current political climate—post-Trump’s 2025 immigration stance—this could trigger lawsuits (e.g., shareholder actions hinted at on X) or DOL investigations. The 2004 Minnesota lawsuit precedent strengthens this risk.

    • Shift Scenario: Hiring H-1B workers for new AI roles, while laying off non-AI staff, aligns with innovation narratives and faces less scrutiny. Politically, AI investment is a bipartisan priority (e.g., CHIPS Act funding), making this defensible unless wage exploitation is proven.
    Analysis and Conclusion
    • Most Likely Scenario: The data leans toward a shift rather than pure replacement. Microsoft’s layoffs span diverse divisions (Xbox, Azure), while H-1B hires focus on AI and cloud skills, suggesting a strategic pivot to AI-driven growth. The cost savings from H-1B wages likely fund this transition, but the overlap in engineering roles leaves room for partial replacement—perhaps 20-30% of layoffs affecting similar positions.
    • Evidence Gaps: Without role-specific data (e.g., from SEC filings or DOL records), it’s hard to quantify exact overlap. Future updates could clarify this.

    • Your FYI on India: The 72% Indian H-1B share and Microsoft’s India R&D hub (employing 20,000+, Hindustan Times) support your relocation hypothesis. Political pressure (e.g., H-1B criticism) might push Microsoft to expand offshore, though tax incentives and U.S. AI leadership keep its HQ in Redmond for now.
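    The arithmetic behind the headline figures in the analysis above is easy to sanity-check. The short script below uses only the numbers quoted in the post (layoff counts, headcount, salaries, LCA filings) and reproduces the 6.7% headcount claim; it also shows the Microsoft-specific wage gap comes out near 18%, at the low end of the general 20-30% figure cited:

```python
# Sanity-check the figures quoted in the post above. All inputs are the
# post's own numbers; no new data is introduced.

layoffs_2025 = 9_000 + 6_000 + 305   # July + May + June cuts ("over 15,300")
workforce = 228_000                  # global headcount as of June 2024

pct_cut = layoffs_2025 / workforce * 100
print(f"2025 layoffs: {layoffs_2025:,} ({pct_cut:.1f}% of headcount)")

h1b_avg_salary = 104_000             # Microsoft H-1B average, 2023
us_median_salary = 127_000           # BLS median software engineer, 2024
wage_gap = (us_median_salary - h1b_avg_salary) / us_median_salary * 100
print(f"H-1B wage gap vs. U.S. median: {wage_gap:.1f}%")

lca_filings = 4_712                  # H-1B LCAs filed in H1 FY2025
print(f"LCA filings cover {lca_filings / 9_000:.0%} of the July layoffs")
```

    The last ratio is why the "scale alignment" point holds: even if every 2025 LCA filing mapped to a laid-off role, it would account for only about half of the July cuts alone.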


    Source date (UTC): 2025-07-03 23:03:18 UTC

    Original post: https://x.com/i/articles/1940909242090987684