#metabrainz

2021-02-04

bitmap: is that on cage?

zas: on pink

zas: sir-prod

zas: but same issue perhaps: failure to connect to floyd/pg sometimes

zas: I think we should plan a floyd/pg reboot tomorrow

bitmap: yeah, pink has a lot of 'took more than 60 seconds' in the logs too, for both website and ws containers

zas: sir-prod restarted, ok now

zas: is pink rebootable?

bitmap: yeah, I can stop PG there

bitmap: do you want to now?

zas: yes, and I'll upgrade docker on it

bitmap: ok

bitmap: let me remember how to bring the service down in consul

bitmap: hrm, what was the service ID? it's not pink:postgres-slave:6899


zas: ?

zas: to list them all: curl http://10.2.2.41:8500/v1/catalog/services | python -m json.tool
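
A minimal sketch of the lookup zas describes: the catalog endpoint lists every registered service name, and a per-service query then shows the exact ServiceID values bitmap was hunting for. The "postgres-pink" service name is an assumption inferred from the IDs used further down:

    # list all registered service names
    curl -s http://10.2.2.41:8500/v1/catalog/services | python -m json.tool
    # show the registered instances, including their ServiceID fields,
    # for one service (service name assumed here)
    curl -s http://10.2.2.41:8500/v1/catalog/service/postgres-pink | python -m json.tool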

bitmap: ok it was pink:postgres-pink:6899

bitmap: consul maint -http-addr=http://10.2.2.41:8500 -enable -service=pink:postgres-pink:6899

bitmap: and for :5432
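
A sketch of the full maintenance sequence being run here; the pink:postgres-pink:5432 service ID is an assumption extrapolated from the :6899 one, and -disable is the standard Consul CLI counterpart for lifting maintenance once pink is back:

    # take both registered postgres instances on pink out of rotation
    consul maint -http-addr=http://10.2.2.41:8500 -enable -service=pink:postgres-pink:6899
    consul maint -http-addr=http://10.2.2.41:8500 -enable -service=pink:postgres-pink:5432
    # after the reboot, re-enable them
    consul maint -http-addr=http://10.2.2.41:8500 -disable -service=pink:postgres-pink:6899
    consul maint -http-addr=http://10.2.2.41:8500 -disable -service=pink:postgres-pink:5432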

bitmap: now containers using those should switch to floyd in a min
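
Maintenance mode works by registering a critical "maintenance" health check against the instance, so it drops out of Consul's passing health queries and DNS answers, and dependent containers fail over to the remaining instance on floyd. One way to verify, assuming the node is registered under the name "pink":

    # all health checks on the pink node; the maintenance check shows as critical
    curl -s http://10.2.2.41:8500/v1/health/node/pink | python -m json.tool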

zas: write this down somewhere in syswiki if not already

bitmap: ok

bitmap: yvanzo: around?

zas: I don't think he is, it's late here

bitmap: ok. there are some queries for decoda cleanup running but I'm not sure where the cron job is. I'll check hip

bitmap: maybe these were just stale psql sessions

bitmap: zas: container is stopped now

zas: ok upgrading

zas: now rebooting

zas: pink is back

zas: bitmap: you can restart pg on pink

bitmap: ok

bitmap: there are some issues with the start_postgres_on_container_start.sh script that was added recently

bitmap: since it tries to run 'sv up pgbouncer' before runsv is started

bitmap: I can comment it out for now, but not sure of a way around that
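
One hedged sketch of a way around the race bitmap describes: have start_postgres_on_container_start.sh wait until runsv is actually supervising pgbouncer before issuing 'sv up' ('sv status' exits non-zero with "runsv not running" until then). The 30-second cap is arbitrary:

    # wait for runsv to pick up the pgbouncer service, then start it
    for i in $(seq 1 30); do
        sv status pgbouncer >/dev/null 2>&1 && break
        sleep 1
    done
    sv up pgbouncer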

zas: if the service is configured and runsv is started, it should start the service automatically, so I think 'sv up pgbouncer' isn't needed

zas: unless a down file is there to prevent that....

zas: the whole purpose of runsv is to ensure services are up

bitmap: yeah, we can probably just remove the down file then
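
In runit's convention, a file named 'down' in the service directory tells runsv not to start the service automatically; deleting it lets runsv bring pgbouncer up on its own as soon as supervision starts, which makes the explicit 'sv up' unnecessary. The /etc/service path is an assumption; the container may use a different SVDIR:

    # let runsv auto-start pgbouncer when the container boots
    rm -f /etc/service/pgbouncer/down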

bitmap: I assumed runsv was already active at that point