Timeline smoothing via FIFO queue
What feature would you like implemented? (Please give us a brief description of what you'd like.)
I'd like to work a bit on how timelines display notes. The current method is kind of inefficient in the sense that it can produce really jarring floods of posts followed by emptiness in the timeline. I'm thinking we could add posts to a "buffer" (we already sort of do this; see Misskey's "fan-out timeline technology"/FTT) and trickle the buffer out via a rate-limiting strategy called the leaky bucket. I think this would help mitigate some of the jarring movement that tends to happen with instances that take a while to process their inbox queue. I'll describe in the next section why this is an issue for me.
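To illustrate the idea, here's a minimal leaky-bucket sketch: bursts of incoming notes accumulate in a FIFO buffer, and a periodic drain releases them to the UI at a steady configurable rate. This is not tied to Sharkey's actual codebase; the names (`TimelineBuffer`, `Note`, `ratePerSec`) are hypothetical.

```typescript
// Hypothetical sketch of leaky-bucket timeline smoothing.
// Incoming bursts pile up in a FIFO queue; drain() is called
// periodically (e.g. on a UI tick) and releases only as many
// notes as the elapsed time and rate allow, oldest first.

interface Note {
  id: string;
}

class TimelineBuffer {
  private queue: Note[] = [];
  private lastDrain: number;

  // ratePerSec: maximum notes released to the timeline per second.
  constructor(private ratePerSec: number, nowMs: number) {
    this.lastDrain = nowMs;
  }

  // Bursty arrivals (e.g. a flood from inbox-queue catch-up)
  // just accumulate here instead of hitting the timeline at once.
  push(note: Note): void {
    this.queue.push(note);
  }

  // Release however many notes the elapsed time permits.
  drain(nowMs: number): Note[] {
    const elapsedSec = (nowMs - this.lastDrain) / 1000;
    const allowed = Math.floor(elapsedSec * this.ratePerSec);
    if (allowed <= 0) return []; // not enough time credit yet
    this.lastDrain = nowMs;
    return this.queue.splice(0, allowed);
  }
}
```

For example, with `ratePerSec = 2`, a burst of 20 notes pushed at once would surface over roughly 10 seconds instead of all at the same moment, and setting the rate very high would effectively restore the current behavior (which could back an opt-out toggle).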
Why should we add this feature? (Please give us a brief description of why your feature is important.)
I find it hideous when my timeline trickles in about 20 notes at once during bulk processing of inbox jobs, and then I get silence before going into another spurt of rapid-fire posts. This is less prevalent now that my workers are decoupled from my frontend (via the Misskey environment variables MK_SERVER_ONLY & MK_WORKER_ONLY, I believe), but it's still an occasional issue. It makes my timeline feel "dead" until the server finishes catching up on posts, and then the timeline scrolls in quick succession as a lot of notes are displayed at once. I'd like this to be an optional feature (if that's not too hard) in case people want to keep the current behavior.
Version (What version of Sharkey is your instance running? You can find this by clicking your instance's logo at the top left and then clicking instance information.)
2024.6.0-transfem
Instance (What instance of Sharkey are you using?)
Contribution Guidelines (By submitting this issue, you agree to follow our Contribution Guidelines.)
- I agree to follow this project's Contribution Guidelines
- I have searched the issue tracker for similar requests, and this is not a duplicate.