got me just as i stepped away from my desk, lol. i’ll have a look in a sec
2025-12-22
mayhem[m]
no rush. bed time here.
2025-12-22
julian45[m]
<mayhem[m]> "julian45: http://mapper...." <- formatting looks good - now i just need to make sure it'll actually be handled properly by a collector
2025-12-22
julian45[m]
so i'll do some quick mocking in docker on my end; meanwhile, zzzzzz time for you
2025-12-22
mayhem[m]
# TYPE lbmapper_rss gauge
2025-12-22
julian45[m]
s/quick/testing/, s/mocking//
2025-12-22
mayhem[m]
* # TYPE lbmapper_cache_items gauge
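In the full exposition format, each gauge carries a HELP line, a TYPE line, and the sample itself; the HELP text and sample values below are illustrative, only the metric names and TYPE lines come from the messages above:

```
# HELP lbmapper_rss Resident set size of the mapper process, in bytes.
# TYPE lbmapper_rss gauge
lbmapper_rss 123456789
# HELP lbmapper_cache_items Number of items currently in the mapper's cache.
# TYPE lbmapper_cache_items gauge
lbmapper_cache_items 4201
```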
2025-12-22
julian45[m]
* so i'll do some testing in docker (emulate collector and visualizer) on my end; meanwhile, zzzzzz time for you
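A hypothetical compose file for that kind of test, with telegraf standing in as the collector and grafana as the visualizer (service names, ports, and paths are guesses, not from the log):

```yaml
services:
  mapper:
    build: .            # the mapper, serving its /metrics endpoint
    ports:
      - "8000:8000"
  telegraf:             # the "collector" half of the test
    image: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - mapper
  grafana:              # the "visualizer" half
    image: grafana/grafana
    ports:
      - "3000:3000"
```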
2025-12-22
mayhem[m]
what should the metric type be for a value that fluctuates randomly over time, julian45?
2025-12-22
julian45[m]
i mean, as a measured value at point-in-time, a gauge should be appropriate based on the [possible metric types](https://prometheus.io/docs/concepts/metric_types/#gauge)
2025-12-22
julian45[m]
* be appropriate type based on
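As a minimal sketch of that, assuming the mapper is Python and uses the official prometheus_client library (neither is stated in the log), a gauge for a fluctuating value looks like:

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# A gauge suits a point-in-time value that can go up or down arbitrarily.
cache_items = Gauge("lbmapper_cache_items", "Items currently held in the cache")

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a collector to scrape
    while True:
        # Hypothetical stand-in for reading the real cache size.
        cache_items.set(random.randint(0, 10_000))
        time.sleep(15)
```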
2025-12-22
julian45[m]
if we just want to make sure it's not getting too large at some point in time, i would think we can handle that math grafana-side?
2025-12-22
Jade[m] joined the channel
2025-12-22
Jade[m]
Yeah generally it's up to the query to distinguish between a rate and a value, because a rate is just a derivative of a value
2025-12-22
julian45[m]
i.e. rates can be derived from exposed values, especially when having the app track rate internally would be computationally expensive
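For instance, in PromQL the per-second trend of a gauge can be computed at query time; deriv() applies to gauges, while rate() is reserved for counters (the counter name below is hypothetical):

```
# per-second slope of the gauge over the last 5 minutes
deriv(lbmapper_cache_items[5m])

# for a monotonically increasing counter, rate() would be used instead
rate(lbmapper_requests_total[5m])
```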
2025-12-22
Jade[m]
integer overflows are handled by restarting your server 🧌
2025-12-22
mayhem[m]
those docs are not helping my tired mind.
2025-12-22
mayhem[m]
> the count of events that have been observed, exposed as <basename>_count
2025-12-22
julian45[m]
don't worry about histogramming anything
2025-12-22
mayhem[m]
mayhem[m]: this is what I want, but it's not clear exactly how that translates to the output
2025-12-22
julian45[m]
at least at this point imo
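Concretely, the quoted <basename>_count line translates to output like the following, as the last sample of a histogram family (the metric name and numbers here are hypothetical):

```
# TYPE lbmapper_request_seconds histogram
lbmapper_request_seconds_bucket{le="0.1"} 240
lbmapper_request_seconds_bucket{le="0.5"} 310
lbmapper_request_seconds_bucket{le="+Inf"} 327
lbmapper_request_seconds_sum 53.4
lbmapper_request_seconds_count 327
```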
2025-12-22
mayhem[m]
I'll continue tomorrow to show the number of cache items.
2025-12-22
julian45[m]
yeah - it would be fine to just show # of cache items at observation time as a gauge
2025-12-22
outsidecontext[m] has quit
2025-12-22
julian45[m]
histograms/summaries are more appropriate for e.g. sorting observations of request response times into buckets. the onus for calculating those usually falls on a client library in a given programming language, and i'm guessing that integrating one into the mapper would be more brain-cycle-expensive, and calculating such metrics more computationally expensive, than would be worthwhile
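The client-library side of that would look roughly like this, again assuming Python and prometheus_client (the metric name and buckets are illustrative):

```python
from prometheus_client import Histogram

request_seconds = Histogram(
    "lbmapper_request_seconds",
    "Request response time in seconds",
    buckets=(0.1, 0.5, 1.0, 5.0),  # the library adds a +Inf bucket itself
)

def handle_request():
    pass  # hypothetical stand-in for the work being measured

# Each observation is binned client-side; _sum and _count are updated too.
with request_seconds.time():
    handle_request()
```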
2025-12-22
Kladky has quit
2025-12-22
v6lur has quit
2025-12-22
julian45[m] uploaded an image: (133KiB) < https://matrix.chatbrainz.org/_matrix/media/v3/download/julian45.net/4RiE5YNoSE8CqHb88Es5H4twEk01nHKB/Screenshot%202025-12-21%20at%208.56.33%E2%80%AFPM.png >
2025-12-22
julian45[m]
some fiddling later, this stuff is clearly parseable by telegraf, woot
2025-12-22
julian45[m]
so we're set on basic validation 👍️
2025-12-22
julian45[m]
* telegraf, woot (just wanted to make sure as a matter of practice + wasn't sure how necessary some bits of some headers were)
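The telegraf side of such a check can be as small as one prometheus input block (the URL and port are guesses; only the fact that telegraf parsed the output comes from the log):

```toml
[[inputs.prometheus]]
  urls = ["http://mapper:8000/metrics"]
```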