Morning Thoughts: 1/20/26
TL;DR: infra decisions I need to make sooner than expected, evaluating my decision to shift toward a monorepo, deeper data-platform thinking, continuous reassessment of productivity tooling, language choices, and where time is actually worth investing given LLM opacity and growing system complexity.
Hitting the Data Infrastructure Decision Wall Earlier Than Expected
Today marked an inflection point I hadn’t fully anticipated: a forced reckoning with data infrastructure decisions that, in theory, were supposed to come later.
It started after I observed degraded performance in new components and existing feature enhancements. Prisma ORM is great because it offers strong types, a clean data model, and sane query ergonomics in TypeScript - but it's not a complete data solution on its own. Turns out, I've been using Prisma for more operations than it was designed for, so I need to rip the band-aid off and migrate to a data platform provider OR integrate one on top of Prisma (in case Prisma is still ideal for certain backend server routing tasks). Supabase seems to be the best fit, since it appears to offer an integrated platform with the following (a sketch of how this might pair with Prisma follows the list):
- DB hosting - client-side direct DB access patterns in particular;
- Authentication - thank god; I'm currently using an open-source solution and ripping my hair out over the incompatibilities it introduces;
- Scalable file storage, realtime, and edge functions - which should let me better handle collaborative features, image uploads, and processing.
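To make the hybrid idea concrete, here's a rough sketch of the split I'm considering: Supabase for auth and storage, Prisma retained for typed server-side queries against the same Postgres instance. This is a sketch under assumptions - the env vars, `avatars` bucket, and `Document` model are placeholders, not my actual schema.

```ts
// Hypothetical hybrid setup: Supabase (auth, storage, realtime) alongside
// Prisma (typed server-side queries). All names below are placeholders.
import { createClient } from "@supabase/supabase-js";
import { PrismaClient } from "@prisma/client";

// Supabase client - usable from the browser for auth and file uploads.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Prisma client - server-only, pointed at the same Supabase Postgres
// instance via its connection string.
const prisma = new PrismaClient();

// Client side: upload an image to a storage bucket.
async function uploadAvatar(file: File, userId: string) {
  const { error } = await supabase.storage
    .from("avatars")
    .upload(`${userId}/${file.name}`, file, { upsert: true });
  if (error) throw error;
}

// Server side: keep Prisma's query ergonomics for relational reads.
async function recentDocuments(ownerId: string) {
  return prisma.document.findMany({
    where: { ownerId },
    orderBy: { updatedAt: "desc" },
    take: 10,
  });
}
```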
The immediate pressure came from my work on a browser extension feature. Unlike a traditional web app, extensions surface edge cases in authentication almost immediately: state persistence, cross-context identity, and security boundaries all behave differently. So a managed authentication platform is critical. In hindsight, I knew I'd need to peel off my open-source solution, but I underestimated how important that would be even during my development-experimentation phase.
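The pattern I expect to need here is a custom session-storage adapter, since `localStorage` isn't shared across extension contexts. A minimal sketch, assuming supabase-js v2 and a Manifest V3 extension - the adapter interface is real, but the setup around it is illustrative:

```ts
// Sketch: persisting a Supabase session in a Manifest V3 browser extension.
// localStorage isn't shared across extension contexts, so supabase-js gets
// a storage adapter backed by chrome.storage.local (assumes @types/chrome).
import { createClient } from "@supabase/supabase-js";

const SUPABASE_URL = "https://your-project.supabase.co"; // placeholder
const SUPABASE_ANON_KEY = "public-anon-key";             // placeholder

const chromeStorageAdapter = {
  getItem: async (key: string): Promise<string | null> => {
    const result = await chrome.storage.local.get(key);
    return result[key] ?? null;
  },
  setItem: async (key: string, value: string): Promise<void> => {
    await chrome.storage.local.set({ [key]: value });
  },
  removeItem: async (key: string): Promise<void> => {
    await chrome.storage.local.remove(key);
  },
};

export const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
  auth: {
    storage: chromeStorageAdapter,
    persistSession: true,
    autoRefreshToken: true,
    detectSessionInUrl: false, // extensions don't get OAuth redirects on page URLs
  },
});
```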
Parallel Development via Git Worktrees and Cursor
To maintain velocity, I've also been experimenting with running multiple Cursor IDE instances, each opened in a different Git worktree. A worktree (`git worktree add`) is an additional working directory linked to the same repository, with its own checked-out branch; by working in parallel worktrees, I can develop non-colliding features simultaneously without constant branch switching.
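The mechanics are simple enough to script. A throwaway sketch of what that could look like - the branch names are hypothetical, and it assumes Node plus git 2.5 or later:

```ts
// Hypothetical helper: create one linked worktree per feature branch so each
// Cursor window gets its own isolated checkout of the same repository.
// Branch names are placeholders; assumes the branches already exist.
import { execSync } from "node:child_process";

const branches = ["feature/extension-auth", "feature/supabase-migration"];

for (const branch of branches) {
  const dir = `../worktrees/${branch.replace(/\//g, "-")}`;
  // `git worktree add <path> <branch>` checks the branch out into a new
  // directory that shares the same .git object store.
  execSync(`git worktree add ${dir} ${branch}`, { stdio: "inherit" });
}
```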
This has been surprisingly effective, but it raises second-order questions. Does this strategy scale? Would it benefit from containerized LLM environments, so each IDE window has isolated context, dependencies, and tooling? Or am I adding complexity faster than I'm removing it?
Productivity Tools vs. Cognitive Overhead
I’m also actively debating whether to invest more time in advanced productivity tooling, particularly the more opinionated context-management and artifact-tracking features offered by Anthropic.
There's a real tradeoff here, I think. On one hand, better tooling promises sharper attention and more effective orchestration of domain-specific tasks. On the other, every additional system introduces new mental load. "If it ain't broke, don't fix it" - especially when many of these tools are barely a couple of years old.
My working hypothesis is that the most durable value is either:
- Already baked directly into model infrastructure, or
- So narrowly specialized that adopting it risks shaping model behavior in ways I don’t fully understand.
LLMs still lack granular transparency at the decision-making level, and that opacity makes premature optimization particularly worrisome to me.
The Real Problem: Time Allocation Under Uncertainty
Most of this collapses into a single, unsolved meta question that keeps me up at night: what is worth investing time in right now? Learning while building is powerful, but it comes with a constant, unpredictable tax. Every decision competes with the opportunity cost of all the others I didn’t make.
Postmortem: The Monorepo and Turbopack Migration
One clear outcome of this phase has been the migration to a monorepo paired with Next.js's Turbopack bundler. A monorepo consolidates multiple services and packages into a single repository, trading isolation for shared abstractions and easier cross-component reuse.
My hypothesis:
- Better modularization
- Easier scaling of features
- Lower friction for coding models to reuse existing components instead of re-deriving them from scratch
So far, this seems directionally correct, though the real test will come as the codebase matures.
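As a concrete illustration of the reuse I'm hoping for - a minimal sketch, assuming a standard workspace setup (pnpm or npm workspaces) and a hypothetical shared package name:

```ts
// Hypothetical monorepo layout (all names are placeholders):
//
//   apps/web         -> Next.js app
//   apps/extension   -> browser extension
//   packages/ui      -> shared code, exposed as @acme/ui
//
// packages/ui/src/index.ts - written once, imported everywhere, so a coding
// model can discover and reuse it rather than re-deriving it per app.
export function formatRelativeTime(date: Date, now: Date = new Date()): string {
  const seconds = Math.round((now.getTime() - date.getTime()) / 1000);
  if (seconds < 60) return `${seconds}s ago`;
  if (seconds < 3600) return `${Math.round(seconds / 60)}m ago`;
  return `${Math.round(seconds / 3600)}h ago`;
}

// apps/web and apps/extension both consume the same implementation:
// import { formatRelativeTime } from "@acme/ui";
```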
Questioning Language Choice
For the first time - well, ever - I'm also questioning my default language choice. I've always worked in TypeScript, and it remains a strong fit, but at this scale and this early a stage, it's worth re-examining assumptions. Language decisions compound over time, and greenfield flexibility is a wasting asset. No conclusions yet, just pondering.
Humbling Myself with Reality
Finally, I've been reading Designing Data-Intensive Applications in parallel with development. The value isn't in following it prescriptively, but in using it as a source of truth against which to test my intuitions. Comparing trial-and-error observations with established systems thinking accelerates learning. More importantly, it forces me to make meaning out of what I'm doing: not just what works, but why. ESPECIALLY when the design decisions I have to make are net-new to me.
Sadly, not every day is about shipping more. But it's work that will create compound returns that my future self will thank me for.