does anyone know what a real world size limit of an RMQ message is? iliekcomputers, I am thinking about the data returned from the user similarity algorithm... can we return everything in one giant chunk of data?
_lucifer
the default limit imposed by rabbitmq is 128 MiB but not sure if you were asking about this
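(For reference, that default corresponds to RabbitMQ's `max_message_size` broker setting, given in bytes, which can be tuned in rabbitmq.conf; a sketch:)

```ini
# rabbitmq.conf: messages above this size are rejected by the broker
# the default is 134217728 bytes (128 MiB)
max_message_size = 134217728
```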
ruaok
that mostly answers my question... now we know the default limit, what is a sensible size?
one message will suffice for now, I think. we don't have that many users.
roger_that joined the channel
roger_that
yvanzo: what do you guys have set for your indexer limit? What are the implications of not processing certain messages?
_lucifer: are you following along in the planning doc?
I've written more on the second page, have a look when you can.
_lucifer
yes, i am. will do.
ruaok
thx
alastairp: _lucifer iliekcomputers shivam-kapila : I've updated the LB social features doc with what I understand to be the current state of things. Please edit/leave comments for anything that isn't quite right.
iliekcomputers
left a few comments
ruaok
thanks, integrated those already.
it currently doesn't feel like we have enough work to keep all of us busy for an entire week, but I am sure this task will grow.
I could take on the troi-bot posting as a stretch goal.
hmm. the feature to recommend a track -- is that part of the timeline or is that separate?
iliekcomputers
i don't think it was, originally.
ruaok
ok, adding a new section.
iliekcomputers
oh btw
_lucifer
alastairp: should we work on lb consul upgrade today?
alastairp
yes, that'd be a great idea
I'm in a meeting for 1-1.5 hours more, but I'm kind of available here from time to time if I'm not paying too much attention to the meeting
let me open the PRs
reosarevok
legoktm: should we be using the Wikimedia REST API for getting the extracts rather than the PHP API?
_lucifer: can you look at each of these sections and double-check that it appears that 1) each config block acts on a single directory in /etc/service/, 2) each block takes files from a single directory in ./docker/services/, and 3) each block adds a `down` file to /etc/service/[service]
_lucifer
yes, sure.
shivam-kapila
Good evening. ruaok: I have added some comments to the planning doc. overall it looks fine
_lucifer
alastairp, one nitpick, follow_dispatcher service is now called websockets. we should probably rename that here as well.
alastairp
good catch, please update it
_lucifer
👍
alastairp
this has already been updated in docker-server-configs. The container name is now `listenbrainz-websockets-$DEPLOY_ENV`
I made this quick script to put things into consul for quick testing, I wasn't sure if there is a better tool to use, but this is fine for me for now
I installed consul, and in a venv python-consul
I start consul with `consul agent -dev`
and then I put some stuff into it with `python publish_consul.py LB/config.prod.json consul/LB/config.prod.json` (from in the docker-server-configs checkout)
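(The publish script itself isn't shown in the log; a minimal sketch of what it might look like with python-consul — the file/key arguments follow the command line above, but the function names here are made up:)

```python
import json
import sys


def load_config(path):
    """Read a JSON config file and return it as a compact string for Consul."""
    with open(path) as f:
        return json.dumps(json.load(f))


def publish(src_path, consul_key, host="localhost"):
    """Write the file's contents into Consul's KV store under consul_key."""
    import consul  # pip install python-consul
    c = consul.Consul(host=host)  # a `consul agent -dev` listens on localhost:8500
    c.kv.put(consul_key, load_config(src_path))


if __name__ == "__main__":
    # e.g. python publish_consul.py LB/config.prod.json consul/LB/config.prod.json
    publish(sys.argv[1], sys.argv[2])
```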
what OS/docker stack are you running?
_lucifer
Ubuntu 20.04
alastairp
ah, because I have `CONSUL_HOST: host.docker.internal` in docker-compose.yml
so that I can run consul on my local machine, however this dns name only works on mac/windows
we might have to run the consul server in docker-compose
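(A sketch of what that might look like — the service and image names here are guesses, not the actual docker-server-configs setup:)

```yaml
# hypothetical docker-compose.yml fragment: run a dev consul agent
# inside the compose network so CONSUL_HOST also works on Linux
services:
  consul:
    image: consul
    command: agent -dev -client=0.0.0.0
  web:
    environment:
      CONSUL_HOST: consul   # resolves via the compose network, no host.docker.internal needed
```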
it has `runit` installed. this manages the startup scripts that we put in /etc/service
when a docker container starts, it first executes /etc/rc.local (this is why we put stuff here), and then it looks in /etc/service for any directory _without_ a `down` file, and executes its `run` file
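(A tiny sketch of that selection rule — this just illustrates the `down`-file convention, it is not runit itself:)

```shell
# fake service tree: one enabled service, one disabled by a `down` file
mkdir -p /tmp/svc/websockets /tmp/svc/api
touch /tmp/svc/api/down

# runit-style rule: start every service directory without a `down` file
started=""
for svc in /tmp/svc/*; do
  [ -e "$svc/down" ] || started="$started ${svc##*/}"
done
echo "would start:$started"
```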
this connects to a consul server (think of consul as a key/value store for application configuration, it has some other magic that isn't necessary to know now)
consul-template takes a config file (e.g. https://github.com/metabrainz/listenbrainz-serv...), performs substitution on the template, generating our config.py, and then runs the script in the `exec {}` block
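(In consul-template terms that looks roughly like this — a hedged sketch, the real template and command paths live in the linked repo:)

```hcl
# hypothetical consul-template config: render config.py from the KV
# store, then run the service; paths and command are illustrative
template {
  source      = "/code/config.py.ctmpl"
  destination = "/code/config.py"
}
exec {
  command = "python manage.py run_web"
}
```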
so on MeB servers there's a centralised consul server, it loads its config magically from the contents of the consul directory in docker-server-configs
so if we want to run the production server workflow locally, we need to start up a consul server, populate it with relevant config information, and then make the `webserver` container connect to it (with the CONSUL_HOST env variable)
_lucifer
that clears it up. thanks for the clear explanation :D
one question though, where does git2consul come into play?
alastairp
git2consul is the magic that reads those files and pushes them into consul
_lucifer
ok, makes sense.
now to run consul server locally, i should add the consulagent and git2consul service to my docker-compose.yml and modify CONSUL_HOST accordingly?
alastairp
I _think so_. I've not done this before, sorry
_lucifer
no worries. let me try this.
BrainzGit
[musicbrainz-server] yvanzo opened pull request #1944 (master…mbs-10416-cntrl-chars): MBS-10416: Remove invalid characters from newly entered annotations https://github.com/metabrainz/musicbrainz-serve...