Fix BUCKET_POS for ticklags with non-integer reciprocals (#72928)
## About The Pull Request
If the ticklag setting has a non-integer reciprocal, like 0.4, timers
will be inserted into the past because the fractional component gets
rounded down. This is bad.
Change originally made on a Bay codebase but it should work here too.
Probably no real impact on mainline TG servers because the commonly-used
ticklags like 0.2, 0.25, 0.33333, 0.5, etc. have (effectively) integer reciprocals, so
dividing by them just multiplies by an integer.
## Why It's Good For The Game
Inserting timers into a bucket in the past (behind the
`practical_offset`) causes a warning/unexpected behavior and should
probably be avoided; the best fix I can think of for it is just rounding
up so that it's placed in the closest *future* bucket.
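For reference, a minimal Python sketch of the failure mode and of the rounding-up fix (the 0.4 ticklag and 1 ds wait are hypothetical values, the helper names are stand-ins for the BUCKET_POS macro, and the % BUCKET_LEN wraparound is omitted for clarity):

```python
import math

def bucket_pos_old(time_to_run, head_offset, tick_lag):
    # Old behavior: the fractional tick count is rounded *down*, so the
    # timer can land in a bucket whose time is before its timeToRun.
    return math.floor((time_to_run - head_offset) / tick_lag) + 1

def bucket_pos_new(time_to_run, head_offset, tick_lag):
    # Fixed behavior: round *up*, placing the timer in the closest future bucket.
    return math.ceil((time_to_run - head_offset) / tick_lag) + 1

tick_lag = 0.4       # non-integer reciprocal: 1 / 0.4 = 2.5
head_offset = 0.0
time_to_run = 1.0    # a 1 ds wait

for name, fn in (("old", bucket_pos_old), ("new", bucket_pos_new)):
    pos = fn(time_to_run, head_offset, tick_lag)
    fire_time = head_offset + (pos - 1) * tick_lag
    print(f"{name}: bucket {pos}, fires at {fire_time:g} (requested {time_to_run:g})")
# old lands at 0.8, before the requested 1.0; new lands at 1.2, the closest future bucket
```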
Co-authored-by: Penelope Haze <out.of.p.haze@proton.me>
Adds a visualizer for lighting object updating. Optimizes the same (#67678)
It occurred to me that we didn't have a good way to "see" what turfs were actually being updated
Figured I'd fix that
I've also added some debug vars on SSlighting to make testing with/without some checks easier
Speaking of which, I've added a second check to lighting corner updating
Basically, if our past and current cached rgb values are the same, there's no point updating
This is possible because static lighting is relative. If you've got a
TON of blue, it'll outweigh the red and green you have in smaller amounts
We also do some rounding to ensure values look right
Similarly, if you've got roughly the same lighting, and a bit of something you already have a lot of is added, you're not likely to actually enter a new "bracket" of color
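A rough Python sketch of that check (illustrative only, not the actual lighting corner code; the 0.01 rounding step is a hypothetical stand-in for whatever precision the corners really use):

```python
def needs_corner_update(cached_rgb, new_rgb, step=0.01):
    # Skip the corner update when the rounded colour hasn't changed.
    # Because static lighting is relative, adding a little more of a channel
    # you already have a lot of often rounds to the same displayed colour.
    snap = lambda v: round(v / step) * step
    return tuple(map(snap, cached_rgb)) != tuple(map(snap, new_rgb))

# A ton of blue plus a tiny bit more blue: same bracket, no update needed.
print(needs_corner_update((0.10, 0.12, 2.40), (0.10, 0.12, 2.403)))  # False
# A genuinely different colour still triggers the update.
print(needs_corner_update((0.10, 0.12, 2.40), (0.60, 0.12, 2.40)))   # True
```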
Anyway uh, it's hard to profile this, but I've seen it help quite a bit, mostly with things like emergency lighting that updates lighting in small amounts often, and in constricted spaces.
To some extent just comes down to map design
(cherry picked from commit a6d4e180ad)
# Conflicts:
# code/modules/lighting/lighting_object.dm
Co-authored-by: LemonInTheDark <58055496+LemonInTheDark@users.noreply.github.com>
* Fix client timers having invalid <1ds waits (#69356)
About The Pull Request
Timers clamped their waits to >world.tick_lag and rounded them to multiples of the same, but this is invalid for clienttime timers. Clienttime timers have a resolution of one decisecond instead, so we now clamp and round to that. (The stacktrace for negative waits is technically invalid but I didn't care enough to touch it.)
Thanks to LemonInTheDark and MrStonedOne for their help in tracking this issue down.
Why It's Good For The Game
These are effectively zero-wait timers, which can mess up the iteration of the clienttime timer queue by being inserted into the past or current tick's list and causing the head/index to desync, potentially leaving spent timers in the queue or firing them again.
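A minimal Python sketch of the corrected rounding described above (illustrative only; the function name and structure are not the actual timer.dm code):

```python
import math

def round_wait(wait, tick_lag, clienttime):
    # Normal timers have a resolution of world.tick_lag; clienttime timers
    # tick in real-world deciseconds, so their resolution is 1 ds instead.
    resolution = 1.0 if clienttime else tick_lag
    return max(resolution, math.ceil(wait / resolution) * resolution)

print(round_wait(0.3, tick_lag=0.5, clienttime=False))  # 0.5 -> one server tick
print(round_wait(0.3, tick_lag=0.5, clienttime=True))   # 1.0 -> one decisecond
```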
* Fix client timers having invalid <1ds waits
Co-authored-by: Penelope Haze <110272328+out-of-phaze@users.noreply.github.com>
* Makes SSTimer actually recover (#64784)
Makes SSTimer actually recover properly when it needs to. This is a follow-up for #60846 (3da51f515d) with code I added in my port of that PR to bee.
There were 3 main problems, and each was uncovered after fixing the previous:
/datum/controller/master/New() was using faulty logic to find existing subsystems. It was adding Sound Loops twice and not adding Timer at all (Sound Loops being a subtype of Timer).
/datum/timedevent stores a ref to the timer subsystem in var/datum/controller/subsystem/timer/timer_subsystem for performance. It wasn't being updated to the new Timer subsystem, ultimately resulting in it runtiming as an invalid timer.
The buckets need to be reset during recovery. The TTR and other bucket-related handling is out of whack because SSTimer wasn't firing for however long recovery took. Luckily reset_buckets() can already handle all of this.
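In Python-ish form, the recovery path described above boils down to something like this (a sketch of the described fix under assumed names, not the actual Recover() implementation):

```python
class TimedEvent:
    def __init__(self, subsystem):
        # Cached for performance; exactly the reference that went stale.
        self.timer_subsystem = subsystem

class TimerSubsystem:
    def __init__(self):
        self.timers = []

    def reset_buckets(self):
        # Stand-in: the real proc rebuilds the bucket list and its offsets
        # from the surviving timed events.
        print(f"rebuilding buckets for {len(self.timers)} timers")

def recover(old_ss, new_ss):
    # Re-point every timed event's cached subsystem reference at the new
    # subsystem so it doesn't runtime as an invalid timer...
    new_ss.timers = old_ss.timers
    for timer in new_ss.timers:
        timer.timer_subsystem = new_ss
    # ...then reset the (badly out-of-date) bucket bookkeeping.
    new_ss.reset_buckets()

old_ss = TimerSubsystem()
old_ss.timers = [TimedEvent(old_ss) for _ in range(3)]
recover(old_ss, TimerSubsystem())
```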
* Makes SSTimer actually recover
Co-authored-by: ike709 <ike709@users.noreply.github.com>
* Fix: Avoid timer scheduling too far events into short queue (#64242)
Previously it was possible for events to enter the short queue when their timeToRun was offset by more than BUCKET_LEN
Now they are forced into the second queue if the timer is processing more slowly than world time advances, allowing the timer to keep up
This PR provides a better definition of TIMER_MAX, so that timed events whose timeToRun is more than one window of buckets away are no longer scheduled into the bucket queue and are properly passed into the second queue instead.
Ports ss220-space/Paradise#578
Should be merged with/after #64138
Detailed explanation
The timer subsystem mainly uses two structures: the buckets and the second queue
The bucket list is a fixed-length list of linked lists, where each "bucket" contains the timers scheduled to run on the same tick
The second queue is a simple sorted list containing timers that are scheduled too far in the future
To process buckets, the timer uses two variables named head_offset and practical_offset
head_offset is the world time corresponding to the first bucket
while practical_offset is the index offset from the beginning of the bucket list
There are two equations responsible for determining where a timed event ends up scheduled:
TIMER_MAX and BUCKET_POS
TIMER_MAX determines the maximum timeToRun for which a timed event is scheduled into the buckets rather than the second queue
while BUCKET_POS determines where to put a timed event relative to the current head_offset
Let's look at BUCKET_POS first
BUCKET_POS(timer) = (((round((timer.timeToRun - SStimer.head_offset) / world.tick_lag)+1) % BUCKET_LEN)||BUCKET_LEN)
Let's imagine we have our tick_lag set to 0.5; because of that we will have BUCKET_LEN = (10 / 0.5) * 60 = 1200
And a head_offset of 100, so any timed event with timeToRun = 100 + 600N gets a bucket_pos of 1
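The same arithmetic in a few lines of Python (mirroring the BUCKET_POS macro with the constants from this example):

```python
import math

TICK_LAG = 0.5
BUCKET_LEN = int((10 / TICK_LAG) * 60)   # 1200 ticks, i.e. one minute of buckets
HEAD_OFFSET = 100

def bucket_pos(time_to_run):
    # Tick index since head_offset, 1-based, wrapped to the bucket list length
    # (the "|| BUCKET_LEN" part maps a result of 0 back to BUCKET_LEN).
    pos = (math.floor((time_to_run - HEAD_OFFSET) / TICK_LAG) + 1) % BUCKET_LEN
    return pos or BUCKET_LEN

print([bucket_pos(HEAD_OFFSET + 600 * n) for n in range(4)])   # [1, 1, 1, 1]
```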
Now let's look at the current implementation of TIMER_MAX
TIMER_MAX = (world.time + TICKS2DS(min(BUCKET_LEN-(SStimer.practical_offset-DS2TICKS(world.time - SStimer.head_offset))-1, BUCKET_LEN-1)))
Let's say our world.time = 100 and practical_offset = 1 for now
So TIMER_MAX = 100 + min(1200 - (1 - (100 - 100)/0.5) - 1, 1200 - 1) * 0.5 = 100 + 1198 * 0.5 = 699
As you can see, in that example we're fine and no events can be scheduled into buckets past the window boundary
But let's now imagine a situation: some high priority subsystem lagged and caused the timer not to fire for a bit
Now our world.time = 200 and practical_offset = 1 still
So now our TIMER_MAX would be calculated as follows
TIMER_MAX = 200 + min(Q, 1199) * 0.5
Where Q = 1200 - 1 - (1 - (200 - 100) / 0.5) = 1200 - 1 - 1 + (200 - 100) / 0.5 = 1398
Which is bigger than 1199, so we will choose 1199 instead
TIMER_MAX = 200 + 599.5 = 799.5
Let's now schedule a repetitive timed event with timeToRun = world.time + 500 = 700
It will be scheduled into the buckets, since 700 < TIMER_MAX = 799.5
BUCKET_POS will be ((700 - 100) / 0.5 + 1) % 1200 = 1
Let's run the timer subsystem
During the execution of that timer, we will try to reschedule it for the next fire at timeToRun = world.time + 500
Which would end up adding it to the same bucket we are currently processing, locking the subsystem in a loop until it suspends
On next tick we will try to continue and will reschedule at timeToRun = world.time + 0.5 + 500
Which would end up in bucket 2, constantly blocking the timer from processing normally
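The whole scenario can be reproduced numerically with a short Python sketch of the two macros (constants as in the walkthrough; this mirrors the arithmetic only, not the subsystem itself):

```python
import math

TICK_LAG = 0.5
BUCKET_LEN = int((10 / TICK_LAG) * 60)   # 1200
HEAD_OFFSET = 100

def timer_max(world_time, practical_offset):
    # Old TIMER_MAX: world.time + TICKS2DS(min(BUCKET_LEN - (practical_offset
    #                - DS2TICKS(world.time - head_offset)) - 1, BUCKET_LEN - 1))
    ticks = min(BUCKET_LEN - (practical_offset - (world_time - HEAD_OFFSET) / TICK_LAG) - 1,
                BUCKET_LEN - 1)
    return world_time + ticks * TICK_LAG

def bucket_pos(time_to_run):
    pos = (math.floor((time_to_run - HEAD_OFFSET) / TICK_LAG) + 1) % BUCKET_LEN
    return pos or BUCKET_LEN

print(timer_max(world_time=100, practical_offset=1))   # 699.0  (healthy case)
print(timer_max(world_time=200, practical_offset=1))   # 799.5  (timer lagged behind)

# With world.time = 200, a repeating timer for world.time + 500 = 700 passes the
# 700 < 799.5 check but lands in the bucket currently being processed:
print(bucket_pos(700))                                  # 1
```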
Why It's Good For The Game
Increases the chances of a smooth experience
Changelog
cl Semoro
fix: Avoid timer scheduling too far events into short queue
/cl
* Fix: Avoid timer scheduling too far events into short queue
Co-authored-by: Aziz Chynaliev <azizonkg@gmail.com>
* Fix: timers not removing from second queue on init (#64138)
Fixes #56292
Why It's Good For The Game
Increases the chances of a smooth experience
Changelog
cl Semoro
fix: timers not removing from second queue on init
/cl
* Fix: timers not removing from second queue on init
Co-authored-by: Aziz Chynaliev <azizonkg@gmail.com>
* Fix: potential bucket corruption in timer reset_buckets (#63427)
Ports Semoro's fix (ss220-space/Paradise#511) related to potential SStimer bucket corruption which caused an infinite loop.
The essence of the fix is that previously, timers that already had a built linked list could get into the second queue, which could cause an incorrect state. The fix works very simply: it resets the state back to the original, correct one
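In spirit, that reset amounts to something like the following Python sketch (the idea only; the real reset_buckets() works on /datum/timedevent intrusive linked lists and re-buckets near-term timers rather than dumping everything into the second queue):

```python
class Timer:
    def __init__(self, ttr):
        self.time_to_run = ttr
        self.next = self.prev = None   # intrusive linked-list pointers

class TimerSS:
    def __init__(self, bucket_len=4):
        self.bucket_len = bucket_len
        self.buckets = [None] * bucket_len
        self.second_queue = []

    def reset_buckets(self, timers):
        # Wipe the possibly-corrupted structures back to a known-good state,
        # forget any half-built linked lists, then re-queue everything.
        self.buckets = [None] * self.bucket_len
        self.second_queue = []
        for t in timers:
            t.next = t.prev = None
            self.second_queue.append(t)
        self.second_queue.sort(key=lambda t: t.time_to_run)

ss = TimerSS()
ss.reset_buckets([Timer(30), Timer(10)])
print([t.time_to_run for t in ss.second_queue])   # [10, 30]
```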
BUT THERE IS STILL A BUG IN THE CODE RELATED TO THE INFINITE LOOP!
For some reason the SStimer on our server recently started breaking at the beginning of the round. Found that the code for the waterfall drip effect was causing the issue, and that setting the frequency to 0 (and sometimes calling reset_bucket) can be used to reproduce the bug. Tried to fix it with this PR
there is an outstanding bug with airlocks causing SStimer to break sometimes.
cl
fix: fixed potential bucket corruption in timer reset_buckets
/cl
* Fix: potential bucket corruption in timer reset_buckets
Co-authored-by: Aziz Chynaliev <azizonkg@gmail.com>
* The Failsafe can now recover from a deleted MC
It's also more reliable and can handle a situation where its main loop runtimes and the MC is stuck
* Reset defcon level correctly
Oops left that in from debugging the levels
* Correctly recover SSasset
* Only decrease defcon if MC creation failed
Also add some sort of sleep between emergency loops
* Makes the last two emergency actions manual procs
Since they are kinda unstable it's probably best
if only admins call these manually
It's also more reliable and can handle a situation where its main loop runtimes and the MC is stuck
You can also now debug Master/New()
While there will most likely never be any situation where the MC is just gone, it's still good to know that the game can recover from such a situation
For example, maybe someone messed up an SDQL query, or maybe someone wanted to delete the MC to create a new one hoping the Failsafe would do so for him
Co-authored-by: Gamer025 <33846895+Gamer025@users.noreply.github.com>
* Changes how weather sends sound to players, reduces sound loop overtime (#59284)
* Converts looping sounds from a list of play locations to just the one
* Updates all uses of looping sounds to match the new arg
* Adds an area based sound manager that hooks into looping sounds to drive the actual audio. I'll be using this to redo how weather effects handle sound
* Some structural stuff to make everything else smoother
Timers now properly return the time left for client based timers
Weather sends global signals when it starts/stops
Looping sounds now use their timerid var for all their sound related timers, not just the main loop
* This is the painful part
Adds an area sound manager component; it handles the logic of moving into new areas potentially creating new
sound loops. We do some extra work to prevent stacking sound loops.
Adds an ash storm listener element that adds a tailored area sound manager to clients on the lavaland z level.
It's removed on logout.
Adds the ash_storm_sounds assoc list; a reference to this is passed into area sound managers, and it's modified
in a manner that doesn't break the reference in ash_storm (This is what I hate)
* Hooks ash storm listener into cliented mobs and possessed objects
* Documents the odd ref stuff, adds an ignore start var to looping sounds, fixes some errors and lint issues
* Applies kyler's review
banging
Co-authored-by: Kylerace <kylerlumpkin1@ gmail.com>
* Cleans up some var names, reduces the amount of looping we do in some areas
* Makes the code compile, redoes the movement listener to be more general
* fuck
* We don't need to detach on del if we're just removing signals on detach
* Should? work
* if(direct) memes
Co-authored-by: Kylerace <kylerlumpkin1@ gmail.com>
* Changes how weather sends sound to players, reduces sound loop overtime
Co-authored-by: LemonInTheDark <58055496+LemonInTheDark@users.noreply.github.com>
Co-authored-by: Kylerace <kylerlumpkin1@ gmail.com>
* Port: fixes of SStimer subsystem from RU SS220 Paradise (#59718)
Unobvious problem spot
#define BUCKET_POS(timer) (((round((timer.timeToRun - SStimer.head_offset) / world.tick_lag)+1) % BUCKET_LEN)||BUCKET_LEN)
With tick_lag equal to 0.1, 0.25, or 0.5, the division rounds normally. But at other values the result may be shifted up or down due to the specifics of floating-point storage and processing: unlike 0.1, 0.25, and 0.5, other tick_lag values lead to quotients such as 245.0000004 when divided.
PS: tick_lag is rounded to the first two decimal places.
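To see the kind of drift being described, here is a small Python sketch emulating BYOND's single-precision math with numpy (the 0.3 tick_lag and the 73.5 offset are hypothetical values chosen to expose the effect):

```python
import math
import numpy as np

tick_lag = np.float32(0.3)      # not exactly representable in binary floating point
elapsed  = np.float32(73.5)     # timeToRun - head_offset; exactly 245 ticks of 0.3

quotient = elapsed / tick_lag   # single-precision division, as BYOND would do it
print(quotient)                 # roughly 244.99998 instead of 245
print(math.floor(quotient) + 1) # bucket 245 instead of the intended 246: off by one
```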
Fixes
Rewrote the circular doubly linked list to a regular doubly linked list, because it could cause timers to loop infinitely.
Fixed re-creation of a bucket if the timer does not have a callback.
Added debug stat indicator RST to MC SStimer.
Added an optional ability to log-dump all timers on crash.
Fixed subsystem logic when a bucket position is misplaced due to division and rounding inaccuracy. The system now captures such rounding errors and restores the correct timer position.
References
[RU] SS220 Paradise port process from TGstation and fixes:
ss220-space/Paradise#5, ss220-space/Paradise#10, ss220-space/Paradise#26, ss220-space/Paradise#32, ss220-space/Paradise#37
Contributors
@ semoro: fixes
@ Bizzonium: port
Changelog
cl Semoro and azizonkg
fix: Ported fixes of SStimer subsystem from RU SS220 Paradise
config: Added a new config var: flag/log_timers_on_bucket_reset
/cl
* Port: fixes of SStimer subsystem from RU SS220 Paradise
Co-authored-by: Aziz Chynaliev <azizonkg@gmail.com>
* Makes timer subsystems available as a new subsystem type (#59073)
* Makes timer subsystems available as a new subsystem type
Co-authored-by: Emmett Gaines <ninjanomnom@gmail.com>
* Makes client timers not able to perma block normal timers (#57588)
* Makes client timers not able to perma block normal timers
Co-authored-by: Emmett Gaines <ninjanomnom@gmail.com>
* sstimer no longer batches maintenance tasks to the bucket list to avoid edge cases and duplicated logic. (#55140)
* sstimer no longer delays maintenance tasks if it's going over its tick.
This was leading to bugs if certain state operations happened while a task was delayed. Furthermore, if the timer subsystem was overloaded, the invoked timers list would bloat as it would never get cleared out, which would make all timer invocations take longer as they had to add to an ever-growing list.
* Update timer.dm
* Fix error when a bucket has only one timer.
* simplify timer loop logic & improve timer debug string
It would try to batch up linked list modifications, and every issue we have ever had has been related to this. Now it just directly pulls the head of the linked list off using bucketEject; rather than detecting when it reaches the end of the linked list queue, it now just knows because the bucket will be empty.
All bucketCount logic has been moved to bucketEject and bucketJoin(), which should also keep that count more correct.
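Roughly, the new loop shape looks like this in Python (a sketch of the idea only; `bucket_eject` stands in for bucketEject, and the real code walks intrusive doubly linked lists rather than a deque):

```python
from collections import deque

def bucket_eject(bucket, timer):
    # Stand-in for unlinking the timer; in the real code the bucketCount
    # bookkeeping lives here and in bucketJoin().
    bucket.popleft()

def fire_bucket(bucket, invoke):
    # Keep pulling the head off the bucket until it is empty: no end-of-list
    # detection and no deferred maintenance batch to replay later.
    while bucket:
        timer = bucket[0]
        bucket_eject(bucket, timer)
        invoke(timer)

fire_bucket(deque(["timer A", "timer B"]), invoke=print)
```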
* Update timer.dm
* Update timer.dm
* sstimer no longer batches maintenance tasks to the bucket list to avoid edge cases and duplicated logic.
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
* Fix client time timers duplicating if any client time timer caused a stack overflow. (#54977)
* Fix client time timers duplicating if any client time timer caused a stack overflow.
* Fix client time timers duplicating if any client time timer caused a stack overflow.
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
* Change signature of BINARY_INSERT to require the full type path, add test (#53217)
BINARY_INSERT used to only take typepaths like/this. Now, it expects them to be /like/this, to be more consistent with the rest of the code.
Adds documentation to COMPTYPE.
Adds a test for BINARY_INSERT.
* Change signature of BINARY_INSERT to require the full type path, add test
Co-authored-by: Jared-Fogle <35135081+Jared-Fogle@users.noreply.github.com>
* Adds Documentation for Entire Timer Subsystem With Minor Refactors (#52415)
* temp
* cleanup
* more
* Revert "more"
This reverts commit 8707bfe75d.
* less is better
* some documentation
* almost there
* we did it
* words
* better words
* Update code/controllers/subsystem/timer.dm
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
* Update code/controllers/subsystem/timer.dm
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
* whitespace
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
* Adds Documentation for Entire Timer Subsystem With Minor Refactors
Co-authored-by: AnturK <AnturK@users.noreply.github.com>
Co-authored-by: Kyle Spier-Swenson <kyleshome@gmail.com>
Makes use of the do while(FALSE) trick to give it its own context and makes it possible for the inserted value to be different from the compared value so #48747 can use it.
Before, we only warned if the wait was 1 or higher, to solve issues with objects setting timers on themselves while getting qdeleted, a common enough use case to support (even if it is annoying), but then I remembered we could check for qdestroying directly.
Also fixes it so such `destroy()` time timers actually run consistently; before, they would only work if called after the `..()` in `destroy()`
Because sstimer tracks timers internally in terms of what "byond tick" they are supposed to run at, float wait values are now rounded *up* to the next Byond tick rather than having byond round the resulting list index *down* at access time.
0 wait timers are now rounded *up* to `world.tick_lag` to avoid incompatibilities with editing the current tick's bucket while it was being processed.
* Timer queuing tweaks: binary sorted inserts and rolling buckets.
Client time timers now use a binary search algorithm for their sorted inserts.
Processing now uses a binary sorted insert, rather than sorting with sortTim during bucket_shifts.
Buckets now automatically wrap around rather then get regenerated every minute. (Rolling queue)
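For the client time queue, a binary sorted insert keyed on timeToRun looks roughly like this in Python (the real implementation is the BINARY_INSERT macro over a DM list; this just shows the idea):

```python
import bisect

def schedule_clienttime(timers, time_to_run, name):
    # Binary-search for the insertion point instead of re-sorting the whole
    # queue afterwards (O(log n) search plus insert vs. a full sortTim pass).
    bisect.insort(timers, (time_to_run, name))

clienttime_timers = []   # kept sorted by timeToRun at all times
for ttr, name in [(30, "late"), (10, "early"), (20, "middle")]:
    schedule_clienttime(clienttime_timers, ttr, name)
print(clienttime_timers)   # [(10, 'early'), (20, 'middle'), (30, 'late')]
```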
* Fixes some queue management bugs.
* Fixes an Order of Operations goof-up in the ticks<->ds macros.
@ninjanomnom your pain is my success
* Remove debug line
* Fixes some binary insert bugs, fixes client time timers, moved id over to GUID
* Fixes initialization-time timers fucking everything up