Rohan Singh's Weblog

Writing about software, data science, and things I learn along the way.

Feed

March 2026
  • Computer inside a transformer

    But what does it take for the LLM itself to be as efficient and reliable as a computer?

    #
  • FreeWispr - local speech to text

On Wednesday, I hit my weekly dictation limit in WisprFlow. On Friday evening, I skipped my usual beer run and built a local, Mac-native, minimal speech-to-text app. It is a menu bar app that does exactly what I want: talk to my agentic systems. No setup, no API keys, no dashboards to track usage. Get started within a minute.

The other research goal I want to pursue with this: how can we improve small, local Whisper-like models with the correction capabilities that LLMs currently provide in applications like WisprFlow?

You can buy it, with lifetime upgrades, using the link.

    #
  • autoresearch-karpathy

Is autoresearch a new paradigm?

    "what is the research org agent code that produces improvements on nanochat the fastest?" this is the new meta. - @karpathy

    #
  • Claude + Obsidian = Love

    I just asked Claude Code: "Did I ever talk about ASR?" - and in seconds it surfaced everything I'd written across two Obsidian vaults: research paper notes from 2022, work metrics docs, ChatGPT conversations, even references buried in Excalidraw diagrams.

    Here's the setup and how you can replicate it.

    What this solves

    You take notes. You have conversations with LLMs. Over time, this becomes thousands of files across multiple vaults. Obsidian search works, but it doesn't synthesize - it gives you a list of files, not an answer.

    The stack

    • Obsidian - two vaults: one for knowledge, one for archived LLM conversations
    • MCP server (bitbonsai/mcp-obsidian) - gives Claude direct file access to your vaults
    • Claude Code* - the CLI that ties it together

*You can use any tool that supports MCP.

    Exact steps

    1. Set up your vaults. Separate vaults for different concerns (I use one for notes, one for conversation exports).
    2. Install the MCP server. Add bitbonsai/mcp-obsidian to your MCP config (~/.config/mcp/mcp_servers.json for Claude Code). Point each instance at a vault path.
    3. Export your LLM conversations. Tools like chatgpt-export can dump your ChatGPT history into markdown files that Obsidian can index.
    4. Ask natural language questions. Claude Code searches across vaults, reads the matching files, and synthesizes a summary - not just links, but context, timelines, and connections between notes.
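For step 2, a minimal sketch of what the MCP config might look like, with one server instance per vault. The package invocation, key names, and paths here are assumptions for illustration; check the bitbonsai/mcp-obsidian README for the exact shape:

```json
{
  "mcpServers": {
    "obsidian-notes": {
      "command": "npx",
      "args": ["-y", "mcp-obsidian", "/path/to/notes-vault"]
    },
    "obsidian-conversations": {
      "command": "npx",
      "args": ["-y", "mcp-obsidian", "/path/to/conversations-vault"]
    }
  }
}
```

Running two instances, one per vault, is what lets Claude Code search both in a single question.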

    Why this matters

    The value isn't in the search. grep can search. The value is in the synthesis: Claude read 14+ files across two vaults and told me that my heaviest ASR focus was on clinical/medical quality - connecting a 2022 research paper to 2024 work metrics I'd forgotten were related.

    Your notes are more useful when something can read all of them at once.

    MCP link

    Cheers

    RS

    #
  • Why Speed Matters (and how Modern AI enables this)

LLMs will enable us to progress at the speed of our thoughts, and to do so optimally.

    ai
    #
AI Doesn't Reduce Work, It Intensifies It

    Nice read for AI folks.

    #
ChatGPT history to Obsidian pipeline

    Just finished a full ChatGPT-to-Obsidian import.

• Conversations updated: 1,373
• Errors: 0
• Indexes updated: 38
• Topic notes updated: 4,016
• Bases updated: 2

Check out the full blog post here.
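The core of an import like this can be sketched minimally: read the exported conversations and write one markdown note per conversation into a vault folder. Note this is a simplified sketch, not the actual pipeline; the real ChatGPT export nests messages in a "mapping" graph, whereas here I assume a flat "messages" list for clarity.

```python
import json
from pathlib import Path

def conversations_to_markdown(export_path: str, vault_dir: str) -> int:
    """Convert a ChatGPT conversations.json export into one markdown
    note per conversation. Returns the number of notes written."""
    out = Path(vault_dir)
    out.mkdir(parents=True, exist_ok=True)
    conversations = json.loads(Path(export_path).read_text())
    written = 0
    for conv in conversations:
        title = conv.get("title") or "untitled"
        lines = [f"# {title}", ""]
        # Assumption: a flat "messages" list; the real export stores
        # messages in a "mapping" graph that must be walked first.
        for msg in conv.get("messages", []):
            lines.append(f"**{msg['role']}**: {msg['content']}")
            lines.append("")
        # Sanitize the title so it is a safe filename.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        (out / f"{safe}.md").write_text("\n".join(lines))
        written += 1
    return written
```

Once the notes land in the vault, Obsidian indexes them like any other file, which is what makes the cross-vault questions in the Claude + Obsidian post possible.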

    #