#metabrainz


      • ruaok
        now I have to upgrade my instance, lol
      • outsidecontext
        Ha
      • Lotheric
Is there an MB funkwhale yet?
      • Lotheric wants to join :)
      • supersandro2000 has quit
      • supersandro2000 joined the channel
      • ruaok
        no. but people have created their own instances.
zas: dumps finally finished. nearly 24 hours. That's quite a lot longer than last time. Not sure what changed.
      • zas
        did it work properly?
      • ruaok
flawlessly, from what I can tell.
      • zas
        nothing changed performance-wise
      • ruaok
no, not for your part. the actual dumping/mogrification was a lot slower than last time.
      • https://github.com/metabrainz/listenbrainz-serv... -- have another look so we can merge, zas?
      • zas
        reviewed
      • ruaok
        thx
      • I guess that wasn't ready to merge, heh.
      • ruaok just wants to be done with it
      • ruaok pushes the fixes.
      • v6lur joined the channel
      • Mr_Monkey
        "Pushafix" sounds like an Astérix et Obélix character.
      • ruaok
        srsly
      • zas: tested the fixes with an incremental dump. all is well. should be good to go now.
pristine___: iliekcomputers: We've fixed the dump issues and generated a full new dump. I've triggered an import into spark just to make sure we have fresh data.
meh. no more interest in fucking with the dump code right now. I'll just forget about it for the time being.
      • iliekcomputers
        Isn't that what we all do, forget about dump code? 🙈
      • zas
ruaok: I approved the PR, I don't want to be too picky ;) but you at least have to fix the PEP8 issues before merging
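      • (a quick way to surface those locally, assuming flake8 as the checker and an illustrative target path:)
      •     pip install flake8
      •     flake8 listenbrainz/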
      • ruaok
        pretty much. I'm just miffed that it stole a day from me this week.
      • merge what?
      • ruaok has forgotten
      • zas
      • ruaok was joking
      • damn, missed this one... ;)
      • ruaok
I'll get back to it late next week, when I think I might have time.
      • michaelqm joined the channel
      • michaelqm
        I'm having an issue setting up a mirror server on digitalocean. I'm using the docker image, and the import fails when postgres dies at this query:
      • SELECT musicbrainz.recording_alias.recording AS musicbrainz_recording_alias_recording,
      •        musicbrainz.recording_alias.type AS musicbrainz_recording_alias_type,
      •        musicbrainz.recording_alias.id AS musicbrainz_recording_alias_id,
      •        musicbrainz.recording_alias.name AS musicbrainz_recording_alias_name,
      •        anon_1.musicbrainz_recording_id AS anon_1_musicbrainz_recording_id
      • FROM (SELECT musicbrainz.recording.id AS musicbrainz_recording_id
      •       FROM musicbrainz.recording
      •       WHERE musicbrainz.recording.id >= 233449 AND musicbrainz.recording.id < 249336) AS anon_1
      • JOIN musicbrainz.recording_alias ON anon_1.musicbrainz_recording_id = musicbrainz.recording_alias.recording
      • ORDER BY anon_1.musicbrainz_recording_id
      • Which results in the error:
      • Failed to import recording with id in bounds (149699, 159990)
      • 2020-11-20 21:13:32,859: (psycopg2.OperationalError) server closed the connection unexpectedly
      • This probably means the server terminated abnormally before or while processing the request.
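      • (that error usually means the backend process died under the client; a minimal way to check whether the db container got OOM-killed, assuming the container names below, would be roughly:)
      •     docker inspect --format '{{.State.OOMKilled}}' system_db_1
      •     dmesg | grep -i 'killed process'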
      • ruaok
        how much ram does your VM have?
      • michaelqm
        16GB
      • I've increased postgres and SOLR to use 4GB each (over the 2GB initial value)
      • That should be enough, no?
      • ruaok
        yes. did you modify the postgres config and restart postgres?
      • michaelqm
        Yeah, I created local/compose/memory-settings.yml and then added it to the .env
      • Here's what's in that file:
      • services:
      •   db:
      •     command: postgres -c "shared_buffers=4GB" -c "shared_preload_libraries=pg_amqp.so"
      •   search:
      •     environment:
      •       - SOLR_HEAP=4g
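      • (for completeness, the override is wired in via .env; assuming the standard docker-compose COMPOSE_FILE mechanism and illustrative file names, roughly:)
      •     # in .env
      •     COMPOSE_FILE=docker-compose.yml:local/compose/memory-settings.yml
      •     docker-compose up -d db search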
      • ruaok
        in theory that should be good.
      • have you looked at the logs of the postgres container?
      • michaelqm
        Yeah, the logs show that query as being the one that kills the server.
      • ruaok
I have no idea what that might be. Seen this before, yvanzo?
      • michaelqm
I found chat logs from here where a user 'nelgin' had the same issue, but other than that I can't find much
      • If i do a `docker stats` while running the recording indexing, I'm seeing system_indexer_1 and system_search_1 both grow to about 3GB of memory each.
      • `system_db_1` stays at a relatively low memory usage.
      • Actually, I'm watching `system_indexer_1` grow quite a bit. It's up to 9GB now.
      • system_search_1 is sticking around 3GB
      • And then it crashes.
So it doesn't look like I ever max out on memory: ~12GB across all the docker containers.
      • Out of 16GB
      • If it is a memory issue, is there anything I can do short of increasing my RAM?
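      • (one stopgap short of resizing the droplet would be adding swap on the host, e.g.:)
      •     sudo fallocate -l 8G /swapfile
      •     sudo chmod 600 /swapfile
      •     sudo mkswap /swapfile
      •     sudo swapon /swapfile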
      • ruaok has quit
      • ruaok joined the channel
      • prabal has quit
      • prabal joined the channel
      • iliekcomputers has quit
      • iliekcomputers joined the channel
      • I think I might have gotten it to work, by doing something counter-intuitive.
I put the 4GB for PG and SOLR that the readme recommends back down to the 2GB default.
      • It's still running, but it's about 10x farther than it's ever gotten.
      • v6lur has quit
      • BrainzGit
        [listenbrainz-server] mayhem merged pull request #1178 (master…critical-dump-fixes): Critical LB dump fixes https://github.com/metabrainz/listenbrainz-serv...