theandrewbailey 4 days ago [-]
> Since the old version of the game is known on both sides, we compress the new version using the old version as its dictionary.
That's quite clever!
> Since we compress once and decompress many times on player machines, we can afford slow compression times. Zstd lets you tune the compression level, and we found that level 19 yielded about 13% better compression than zip.
Zstd is parallelizable across threads, which wasn't mentioned here. It helps speed it up at high compression levels, though not as much as I'd like.
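The quoted delta trick (seed the compressor with the old build so clients only pay for the diff) can be sketched without zstd: Python's stdlib `zlib` exposes the same preset-dictionary mechanism via `zdict`. The "builds" below are made-up toy data; with zstd itself the equivalent is `--patch-from`.

```python
import random
import zlib

# Toy stand-in for a shipped build and its patched successor.
old_build = random.Random(0).randbytes(8000)                # what players already have
new_build = old_build[:4000] + b"PATCH" + old_build[4000:]  # mostly unchanged

# Without the dictionary, the new build is incompressible noise.
plain = zlib.compress(new_build, 9)

# Seeding the compressor with the old build lets it emit back-references
# into data the client already has, so only the diff costs bytes.
comp = zlib.compressobj(9, zlib.DEFLATED, zdict=old_build)
patch = comp.compress(new_build) + comp.flush()

# The client decompresses against its local copy of the old build.
decomp = zlib.decompressobj(zdict=old_build)
assert decomp.decompress(patch) == new_build

print(len(plain), len(patch))  # the dictionary-seeded patch is far smaller
```

zlib's 32 KB window limits how much of a real build the dictionary can cover, which is one reason zstd (with its much larger windows and dedicated patch mode) is the better fit for multi-gigabyte game updates.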
oddurmagnusson 3 days ago [-]
We do use all available threads for it; we just didn't call it out in the article.
lights0123 4 days ago [-]
Nice! There's also zstd's flush ability that I've used for streaming robotics data. You can write data and flush it over the network for realtime updates, but the compression stream stays open so it can still reference past messages. This means messages get smaller over time so you don't need to share a dictionary ahead of time. I'm not aware of other compression algorithms that have flushing capability like this.
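The same flush-but-keep-the-stream-open pattern exists in stdlib `zlib` (`Z_SYNC_FLUSH`), which makes the effect easy to demonstrate without zstd. The JSON-ish update messages below are invented for illustration:

```python
import zlib

# One long-lived compression stream per client. Z_SYNC_FLUSH emits a
# complete, immediately decodable chunk while keeping the stream (and its
# history window) open, so later messages can back-reference earlier ones.
comp = zlib.compressobj(6)
decomp = zlib.decompressobj()

msg1 = b'{"object":"X","pos":[10.0,2.5,-3.1],"hp":100,"mana":50}'
msg2 = b'{"object":"X","pos":[10.0,2.5,-3.1],"hp":99,"mana":50}'  # tiny change

chunk1 = comp.compress(msg1) + comp.flush(zlib.Z_SYNC_FLUSH)
chunk2 = comp.compress(msg2) + comp.flush(zlib.Z_SYNC_FLUSH)

# Each chunk decodes on arrival; no dictionary was shared ahead of time.
assert decomp.decompress(chunk1) == msg1
assert decomp.decompress(chunk2) == msg2

print(len(chunk1), len(chunk2))  # chunk2 shrinks: it back-references msg1
```

The second chunk is mostly back-references into the first, so per-message size drops as the stream warms up, exactly the behavior described above.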
> binary data to connected clients in tiny messages, each saying “field 5 on object X is now 123”
I wonder how Meta's newer, format-understanding OpenZL would do. I imagine its schemas could be auto-generated from protobuf.
oddurmagnusson 3 days ago [-]
Ah, I have not looked into that, zstd keeps giving.
Our updates are not only code; since it's a game, it's a mixture of game assets (textures, sounds, large JSON files...) and code. Zstd is pretty good all around. For pure code updates, I'd probably evaluate code-specific compressors rather than zstd; I know there is an ecosystem of those out there.
These are optimized for compiled code though.