These were some very useful infographics. Thanks.
Did they use AI tools to generate the infographics?
they are so consistently the same... and have been... that my gut says no, but maybe they've been able to auto-generate them now. that would save some time.
NotebookLM is definitely one of the best tools I’ve used for my research.
But for me, the real breakthrough in AI use cases comes when you can connect it to all external tools via MCPs.
I never realized how significant the overhead of context switching is.
process-based vs thread-based is one of those things people use daily without ever thinking about what's actually happening under the hood
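a minimal sketch of that under-the-hood difference (CPython assumed; the "fork" start method used here is Unix-only): a thread shares the parent's memory, so its write is visible afterwards, while a forked process mutates its own copy and the parent never sees it.

```python
import multiprocessing
import threading

counter = 0  # module-level state, visible to threads in this process

def bump():
    global counter
    counter += 1

def bump_and_report(q):
    # Runs in a child process: it mutates a *copy* of the parent's state.
    global counter
    counter += 1
    q.put(counter)  # value as seen inside the child

# Thread: same address space, so the increment is visible to the parent.
t = threading.Thread(target=bump)
t.start(); t.join()
after_thread = counter

# Process ("fork" context): child gets a copy-on-write snapshot, so its
# increment never reaches the parent.
ctx = multiprocessing.get_context("fork")
q = ctx.Queue()
p = ctx.Process(target=bump_and_report, args=(q,))
p.start()
child_value = q.get()
p.join()
after_process = counter  # unchanged by the child

print(after_thread, child_value, after_process)
```

the thread leaves `counter` at 1; the forked child sees 2 internally, but the parent's `counter` is still 1 afterwards — isolation vs sharing in three lines of output.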
honest reaction: most AI productivity frameworks seriously undercount the cost side.
I tracked my actual AI spend vs what it directly generated for a full month - $400 in, $355 directly attributable out. the gap between "AI saves time" and "AI creates measurable value" is where most people get fuzzy. turns out the execution gap matters way more than model quality past a certain threshold. the math was eye-opening once you track it honestly.
Wrote up the whole experiment here if you want the actual data: https://thoughts.jock.pl/p/project-money-ai-agent-value-creation-experiment-2026
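for anyone wanting to run the same honest-tracking math on their own numbers, the figures quoted above work out like this (the two inputs are the only data taken from the comment; everything else is plain arithmetic):

```python
# Figures from the month-long tracking experiment described above.
spend = 400.0        # AI spend for the month, USD
attributed = 355.0   # value directly attributable to AI output, USD

net = attributed - spend   # negative means the month ran at a loss
roi = net / spend          # return on the spend, as a fraction

print(f"net: {net:+.2f} USD, ROI: {roi:.2%}")
```

that's a net of -45 USD, i.e. roughly -11% ROI for the month — which is the gap between "saves time" and "creates measurable value" in one number.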