the requests it makes work outside the container though
2017-05-15 13532, 2017
ruaok
last time I had to restart the docker daemon to fix this.
2017-05-15 13533, 2017
bitmap
same ip and port
2017-05-15 13502, 2017
zas
bitmap: i'll restart docker
2017-05-15 13508, 2017
bitmap
ok
2017-05-15 13530, 2017
zas
same
2017-05-15 13504, 2017
zas
let me check firewall rules
2017-05-15 13523, 2017
outsidecontext joined the channel
2017-05-15 13549, 2017
zas
oh, a rule is missing
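A rule of roughly this shape is presumably what gets added here; the chain and source subnet below are hypothetical, only the postgres port is implied by the discussion:

    # allow the peer host to reach postgres on its published port
    iptables -A INPUT -p tcp -s 10.1.1.0/24 --dport 5432 -j ACCEPT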
2017-05-15 13545, 2017
zas
should be fixed
2017-05-15 13549, 2017
zas
bitmap: check
2017-05-15 13556, 2017
outsidecontext has quit
2017-05-15 13504, 2017
zas
i wonder why it was working before
2017-05-15 13513, 2017
bitmap
yep, postgres is up
2017-05-15 13532, 2017
bitmap
"database system was not properly shut down; automatic recovery in progress" <- that's concerning, I wonder if docker stop doesn't do the right thing
2017-05-15 13541, 2017
bitmap
it recovered fine though
2017-05-15 13503, 2017
ruaok
I've seen the docker restart not affect containers and at other times kill them violently.
2017-05-15 13520, 2017
zas
docker stop may kill the thing if it's too slow; one needs to set up delays
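The delay zas refers to is the stop grace period: docker stop sends SIGTERM and escalates to SIGKILL after 10 seconds by default, which can be too short for postgres to shut down cleanly. A minimal sketch, with a made-up container name:

    # give the container up to 60 seconds to exit before it is SIGKILLed
    docker stop --time 60 musicbrainz-postgres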
2017-05-15 13530, 2017
ruaok
next is postgres slave?
2017-05-15 13538, 2017
bitmap
yes, moving on
2017-05-15 13508, 2017
zas
also the container has to handle signals properly ... which isn't always trivial
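A common way to get signal handling right is to exec the real process from the entrypoint script, so it runs as PID 1 and receives the SIGTERM from docker stop directly; a sketch with assumed paths:

    #!/bin/sh
    # hypothetical entrypoint.sh: without exec, the shell stays as PID 1
    # and the SIGTERM never reaches postgres
    exec postgres -D /var/lib/postgresql/data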
2017-05-15 13525, 2017
Mineo joined the channel
2017-05-15 13550, 2017
zas
bitmap: you removed the pg container and built it again, right? because the firewall rule wasn't there since the start, so i think the network mode changed, else i don't see how it could have worked til now
2017-05-15 13525, 2017
bitmap
I removed the container and started a new one
2017-05-15 13538, 2017
zas
yes, commands were different
2017-05-15 13556, 2017
bitmap
they were?
2017-05-15 13509, 2017
zas
else i don't see how it could have worked
2017-05-15 13534, 2017
zas
possibly network=host before?
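The distinction being guessed at, sketched with generic docker run invocations (image name and port are assumptions, not taken from this log): with --network=host the container shares the host's network stack and needs no extra firewall rule, while on the default bridge network the port has to be published and allowed through the host firewall.

    # host networking: the service listens on the host's own IP, no port mapping
    docker run -d --network=host --name pg postgres
    # bridge networking (default): publish the port; the host firewall must
    # still permit external access to it
    docker run -d -p 5432:5432 --name pg postgres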
2017-05-15 13515, 2017
bitmap
slave is back up
2017-05-15 13517, 2017
bitmap
not sure...
2017-05-15 13525, 2017
bitmap
I'll have to check git history
2017-05-15 13554, 2017
bitmap
I think we can move on now
2017-05-15 13523, 2017
bitmap
next one is for zas :)
2017-05-15 13544, 2017
ruaok
> Bring all sites back up except MusicBrainz (prod, beta), the search server, and the CAA:
2017-05-15 13548, 2017
ruaok
go zas
2017-05-15 13558, 2017
zas
i do
2017-05-15 13557, 2017
ruaok
metabrainz up
2017-05-15 13515, 2017
zas
critiquebrainz upstream is down
2017-05-15 13530, 2017
zas
but gateways are ok
2017-05-15 13552, 2017
ruaok
go on with other bits? the 'start services' step later on should bring them back up...
2017-05-15 13554, 2017
zas
i checked, all are ok on the nginx side, but some upstreams are down
2017-05-15 13552, 2017
ruaok
can we just go on?
2017-05-15 13522, 2017
zas
yes, 2 are missing backends
2017-05-15 13527, 2017
zas
the rest is ok
2017-05-15 13500, 2017
ZarkBit has quit
2017-05-15 13524, 2017
zas
upgrades done, i'll reboot when we do the docker upgrade, nothing critical
2017-05-15 13550, 2017
bitmap
ok, my turn
2017-05-15 13541, 2017
bitmap
containers removed, going to start upgrade.sh
2017-05-15 13557, 2017
ruaok
wooo
2017-05-15 13507, 2017
ruaok
how long do you expect that to run?
2017-05-15 13509, 2017
bitmap
the schema changes should finish fairly instantly, I don't remember how long vacuuming takes
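The vacuum step is presumably something along these lines (database and user names are invented for the sketch); its runtime scales with table sizes, which is why it is the hard part to estimate:

    # refresh planner statistics after the schema change
    psql -U musicbrainz musicbrainz_db -c 'VACUUM ANALYZE;'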
2017-05-15 13534, 2017
bitmap
not running yet, some missing steps I'm adding
2017-05-15 13539, 2017
ruaok
k
2017-05-15 13522, 2017
yvanzo
ruaok: no fun for me, I have been off for days :(
2017-05-15 13532, 2017
bitmap
ok, fingers crossed
2017-05-15 13517, 2017
ruaok
I hope it is something fun.
2017-05-15 13539, 2017
ruaok
and not being stuck in bed, sick. :(
2017-05-15 13558, 2017
yvanzo
no, visiting ppl who are.
2017-05-15 13527, 2017
bitmap
fixing some shell script error, sigh
2017-05-15 13529, 2017
Freso
yvanzo: :( Family?
2017-05-15 13552, 2017
Freso heads off to https://thesession.org/sessions/1499 now, since he will actually be able to make it there for once