Update for the week ending on Friday, Jan 3, 2025
mtlynch.io
- Started my December retrospective
- Published my notes for The Case for Open Borders
- Integrated my book cover into the self-ad for Refactoring English
Refactoring English
- Published the first chapter: “Rules for Writing Software Tutorials”
- Continued working on passive voice chapter
- Added support for “Discuss on…” links at the bottom of posts
- Got Open Graph tags working for Twitter cards
- Fixed other Open Graph properties that I discovered were broken when I started sharing the post on social media
- Had a call with a tech publisher about potentially working together
- My plan is to self-publish the ebook first and then see whether it makes sense to partner with a publisher
- Started writing chapter on design docs
fusion
fusion is an open-source RSS reader I found when looking for an RSS aggregator to host on my NixOS system. I like that it’s written in Go and uses SQLite as a backend, so it’s pretty easy to self-host. The maintainer is very responsive to PRs as well.
- Simplified password auth logic
- Switched the base Docker image to Alpine Linux
- Made config settings read-only after loading
- Avoided swallowing an error when creating a new session
- Changed reading order to order of publication
- Removed a dead script option
ScreenJournal
ScreenJournal is basically Goodreads, but for TV and movies. Or Letterboxd, but focused on small communities.
- Added a users page
PicoShare
PicoShare is a minimalist web-based file sharing tool I’m working on. I’m often frustrated that I can’t just send someone a link directly to a file because every file-sharing service tries to re-encode images/video or wrap their own viewer around other files, so I’m making a simple self-hostable tool that lets you upload files and share them with other people.
- Made a second attempt at switching to a database driver that supports SQLite Blob I/O from Go
- I had tried this in the past but ran into several issues and lost motivation to keep debugging it
- The main insight I had returning to it was that I could use Blob I/O for writing files but keep reading files the inefficient way
- When users run into issues, it seems to be with the write step rather than the read step
- I also realized that my previous write logic was overly complicated
- I implemented an io.Writer that would write to the SQLite database in chunks, but I realized I don't even need to implement io.Writer because I know the full file size up front, so I can just write it to the database in exactly the chunks I want without worrying about buffers that need to be flushed (there's a sketch of this after the list)
- Figured out how to deploy a desktop GUI to a Fly.io server
- I wanted a way to test the new upload functionality using large files and a fast connection, and my home uplink sucks
- Switched to a different API for reading files from the database
- Use a FileSize type
- This prevents a file size of zero, which PicoShare doesn't support (sketched below the list)
- Make chunk size a uint64
- Protect against a race condition in tokenToDB
- Fix some SQLite integrity issues that mattn/sqlite3 for some reason never flagged
- Don’t depend on system time in file expiration unit tests
- The tests started failing on 2025-01-01 because I had hardcoded that date as a future date. How was I to know it would eventually become a past date? (also sketched below the list)
- Redefined a function that I only used in one place as a local variable
- Removed unnecessary string casts
- Make serve-docker work even if .env.dev isn't present
- Log how many rows were affected in a database purge task
- Fixed a Docker warning about mixing case
- Improve documentation on getChunkSize
- I forgot what it was doing, so I had to capture what I re-learned
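
To make the chunked-write idea concrete, here's a minimal sketch of streaming a file of known size into SQLite with incremental Blob I/O from Go. It assumes the zombiezen.com/go/sqlite driver (one Go driver that exposes SQLite's Blob API); the table and column names are made up and aren't necessarily PicoShare's actual schema.

```go
// Minimal sketch: stream a file of known size into a SQLite blob in chunks.
// Assumes the zombiezen.com/go/sqlite driver; table/column names are made up.
package sqlitestore

import (
	"io"

	"zombiezen.com/go/sqlite"
	"zombiezen.com/go/sqlite/sqlitex"
)

// insertFile writes r into the entries table without buffering the whole
// file in memory. fileSize must be the exact size of r's contents.
func insertFile(conn *sqlite.Conn, id string, r io.Reader, fileSize int64) error {
	// Pre-allocate the blob with zeroblob() so it can be filled incrementally.
	err := sqlitex.Execute(conn,
		`INSERT INTO entries (id, contents) VALUES (?, zeroblob(?))`,
		&sqlitex.ExecOptions{Args: []any{id, fileSize}})
	if err != nil {
		return err
	}

	// Open the just-inserted blob for writing.
	blob, err := conn.OpenBlob("", "entries", "contents", conn.LastInsertRowID(), true)
	if err != nil {
		return err
	}
	defer blob.Close()

	// Because the total size is known up front, we can copy fixed-size chunks
	// straight into the blob with nothing left over to flush at the end.
	_, err = io.CopyBuffer(blob, r, make([]byte, 32*1024))
	return err
}
```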
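
The FileSize change is roughly this kind of type: a named integer whose constructor rejects zero, so an unsupported size can't sneak past the point where input is parsed. The constructor and error names here are illustrative, not necessarily PicoShare's.

```go
// Sketch of a FileSize type that can't hold an unsupported zero value.
package picoshare

import "errors"

// FileSize is the size of an uploaded file in bytes.
type FileSize uint64

var ErrZeroFileSize = errors.New("file size must be non-zero")

// NewFileSize rejects zero so that an unsupported size never propagates
// into the rest of the code.
func NewFileSize(size uint64) (FileSize, error) {
	if size == 0 {
		return 0, ErrZeroFileSize
	}
	return FileSize(size), nil
}
```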
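
And here's a sketch of the kind of fix that keeps expiration tests from depending on the system clock: derive "future" and "past" from a fixed reference time instead of hardcoding a calendar date that will eventually pass. The isExpired helper is a stand-in, not PicoShare's actual function.

```go
// Sketch of an expiration test that uses a fixed reference time rather than
// the system clock, so it stays deterministic forever.
package picoshare

import (
	"testing"
	"time"
)

// isExpired is a stand-in for the real expiration check.
func isExpired(expiration, now time.Time) bool {
	return expiration.Before(now)
}

func TestExpiration(t *testing.T) {
	// A fixed reference instant instead of time.Now().
	now := time.Date(2024, time.June, 1, 0, 0, 0, 0, time.UTC)

	if isExpired(now.Add(24*time.Hour), now) {
		t.Error("a file expiring tomorrow should not be expired")
	}
	if !isExpired(now.Add(-24*time.Hour), now) {
		t.Error("a file that expired yesterday should be expired")
	}
}
```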
Misc
- Got my AirGradient air quality monitor connected to my local network again
- I had shut off the server it was connected to and hadn’t gotten around to reflashing it with the new server address
- Set up email alerts when air quality drops (PM2.5 goes above 20)