akshaaatt: One thing I noticed on the login page: the "Create an account" link should be more distinguishable in a text context.
2021-11-27
akshaaatt
Agreed yyoung[m]! Thanks for the feedback :)
2021-11-27
zas
dumpjson on aretha needs to be adjusted somehow; alerts are back (they were silenced), but the problem wasn't addressed.
2021-11-27
zas
it takes 10 seconds to log in to aretha via ftp/ssh or https when they run, not really great for our own file server.
2021-11-27
zas
that's my last warning about this btw.
2021-11-27
ruaok
who are you speaking to, zas? if you need someone to do something, you need to address a person in particular.
2021-11-27
zas
I don't know who developed this, I don't even know what they are supposed to do. All I know is that, 2 months ago, I warned about those processes being a problem, and I was asked to silence alerts, which I did, but the problem is still there, and it doesn't seem anyone will ever fix it. So tell me, what am I supposed to do?
2021-11-27
ruaok
You have a number of options that are better than making threats to no one in particular.
2021-11-27
ruaok
Some of them are:
2021-11-27
ruaok
1) Enlist me to help with this, ideally a last resort.
2021-11-27
ruaok
2) Speak to the devs who have worked on this before, when they are around, not when they aren't or are asleep.
2021-11-27
ruaok
3) Offer solutions to the problems: make a new VM; make room on another machine; ask to see what they are doing to restrict resource use.
2021-11-27
ruaok
4) Bring it up during the meeting so that the whole team knows what is up.
2021-11-27
ruaok
So, please pick one and do that instead of passive-aggressive anger in this channel.
2021-11-27
zas
yes, but those are post-coffee options, and it seems I already tried all of those in the past
TOPIC: MetaBrainz Community and Development channel | MusicBrainz non-development: #musicbrainz | BookBrainz: #bookbrainz | Channel is logged; see https://musicbrainz.org/doc/IRC for details | Agenda: Reviews, Follow up on Summit Notes -> Tickets, theless/insolite (reosarevok), aretha load (zas)
2021-11-27
ruaok
yep, another communication without a target. If yvanzo had been your contact person before, why not address yvanzo?
2021-11-27
zas
I cannot target anyone, I don't know who did this stuff nor who deployed it
2021-11-27
ruaok
2 lines down from your comment yvanzo explained it to you.
2021-11-27
ruaok
you know WHAT it is and WHO knows the process. that is plenty to go on.
2021-11-27
ruaok
and previously I jumped in and offered solutions like getting a dedicated machine to run the dumps on. you never responded.
2021-11-27
zas
ok, so that's my responsibility. I guess I should get a coffee first. I'll find a solution.
2021-11-27
ruaok
that is how we work. and worst case, if you can't track down who you talked to, then ask me again.
2021-11-27
ruaok
but really, I don't get it. it's a musicbrainz task. that's its name. you know who is on the MB team.
2021-11-27
ruaok
yes, please coffee and solutions.
2021-11-27
zas
yvanzo: I don't think https://github.com/metabrainz/docker-server-confi… actually changes anything, because the load issue comes from disk I/O saturation, not CPU. This machine uses HDDs with software RAID, so heavy I/O isn't exactly their strength. Additionally, those processes write to a docker volume and to internal /tmp, so I guess the move operations are in fact copies.
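For illustration, the disk-versus-CPU diagnosis can be double-checked while the dumps run with standard tools such as iostat (from sysstat) and iotop; a minimal sketch, with aretha's actual device names to be read from the output:
    # %util close to 100% on the RAID member disks while CPU stays low confirms
    # that the disks, not the CPU, are the bottleneck.
    iostat -dx 5
    # accumulated per-process I/O, to see which processes do the heavy writing
    iotop -ao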
2021-11-27
zas
so, first I would ensure the scripts dump & compress in the same directory and avoid cross-filesystem operations
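For illustration, "dump & compress in the same directory" could look roughly like the sketch below; it assumes a plain pg_dump, whereas the actual dump scripts may differ, and the database name and paths are hypothetical:
    # Write the compressed dump straight into its final directory; the rename at
    # the end stays on the same filesystem, so no cross-filesystem copy happens.
    OUT=/data/dumps
    pg_dump musicbrainz_db | gzip > "$OUT/mbdump.sql.gz.part" \
        && mv "$OUT/mbdump.sql.gz.part" "$OUT/mbdump.sql.gz"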
2021-11-27
ruaok
one option is to make a VM just for the purpose of making dumps and compressing them. we can open an IP/port on the firewall for PG access. once the dumps on the VM are done, copy back to aretha.
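A rough sketch of that option, assuming the dump VM can reach PostgreSQL on aretha through the opened IP/port; hostname, user, database and paths are placeholders:
    # On the dump VM: dump over the network, compress locally, then copy only the
    # finished archive back, so aretha's disks mostly see one sequential write.
    pg_dump -h aretha.example.org -p 5432 -U musicbrainz musicbrainz_db \
        | gzip > /scratch/mbdump.sql.gz
    rsync --partial /scratch/mbdump.sql.gz aretha.example.org:/data/dumps/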
2021-11-27
ruaok
and sod the VM, if it is overloaded, fine. it will just take longer and that is its sole job.
2021-11-27
zas
second, use ionice for all I/O-intensive processes in play, or use docker I/O options (docker run --help | egrep 'bps|IO')
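As a hedged example of both approaches, where the script name, device and limits are placeholders rather than the real setup:
    # Run the dump/compress step in the idle I/O scheduling class so interactive
    # ssh/https traffic on aretha wins whenever the disks are contended.
    ionice -c 3 nice -n 19 ./run-json-dump.sh

    # Or cap the container's block I/O at the docker level (IMAGE is a placeholder).
    docker run --device-read-bps /dev/sda:20mb --device-write-bps /dev/sda:20mb IMAGE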
2021-11-27
zas
ruaok: yes, offloading is the last resort, but imho the process isn't efficient, it seems to me it moves files across dirs/fs and this is something to address first, to reduce the resources needed. According to what I saw, it writes to /tmp in the container, but the files end up on another volume, which means extra I/O for sure.
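One quick way to check whether that /tmp-to-volume step really crosses filesystems; the container name and volume path here are placeholders:
    # If the two paths report different filesystems, every "move" between them is
    # a full copy plus delete, i.e. the dump data is written to disk twice.
    docker exec mbdump-container df -h /tmp /media/dumps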
2021-11-27
zas
ruaok: about offloading, we can start a VM on demand for this task. On aretha, it runs for 20 hours every week, so that's a very good candidate for that
2021-11-27
CatQuest pets zas
2021-11-27
reg[m]
Coffee sounds like a good idea
2021-11-27
yvanzo
zas: Thanks for the pointers. I will check if copies can be made on the same filesystem and try using ionice. Already tried to use some docker io options: https://github.com/metabrainz/docker-server-confi…
2021-11-27
zas
yvanzo: how long can we allow this dump to run? If reducing I/O load isn't enough we can spawn a VM for it, but I guess it depends on consul too, so we may need a virtual network for the VM to access consul/pg
2021-11-27
zas
also I noticed it writes to a temporary directory (in /tmp inside the container), is that used before compression? or are files written there then moved?
2021-11-27
yvanzo
Allowing these dumps to run for 12h on Wed and Sat is more than enough. I think it takes even less than 6h in general, would have to check again.
2021-11-27
zas
according to graphs, it seems to run longer than that, and I noticed barman & search dumps run during the same time window