#metabrainz


      • thevar1able has quit
      • thevar1able joined the channel
      • Lotheric_ joined the channel
      • SothoTalKer_ joined the channel
      • Lotheric has quit
      • SothoTalKer has quit
      • SothoTalKer_ is now known as SothoTalKer
      • aerozol
        rdswift: The link to the ticket tracker filter here doesn’t work for me (the requested filter doesn’t exist or is private): https://picard-docs.musicbrainz.org/en/about_pi...
      • I can’t find any kind of ‘Picard Docs’ section or component so I’ll just hold off adding a ticket until I know where to put it
      • (to remind myself, it was about adding DBpoweramp to the options here: https://picard-docs.musicbrainz.org/en/workflow... - after checking if it’s actually already in prod)
      • lucifer
        aerozol: spotify probably has an abundance of both ;). but yes, i think the huesound design can improve a lot. that said, i dislike Spotify's current design for the page too.
      • aerozol
        Oh yes I hate it :D
      • Halfway between huesound and their mess would probably be perfect!
      • Maybe a bit closer to huesound...
      • Good morning by the way!
      • Wooo I have a local LB server running! No CSS by the looks of it, but a start
      • BrainzGit
        [musicbrainz-server] 14mwiencek merged pull request #2662 (03beta…mbs-12626): MBS-12626: Fix error submitting multiple instrument relationships https://github.com/metabrainz/musicbrainz-serve...
      • aerozol
        monkey: this is the output after running ./test.sh fe -u - should I push the changes or is this unexpected?
      • BrainzGit
        [musicbrainz-server] 14mwiencek merged pull request #2663 (03beta…mbs-12625): MBS-12625: Autocomplete2: Show recent instruments https://github.com/metabrainz/musicbrainz-serve...
      • alastairp
        good morning everyone!
      • lucifer: thanks for fixing CB, I wonder if it continued to work on CI because we had cached layers and we never rebuilt from scratch?
      • ansh: give me a moment and I'll look through your licensing questions, did you clarify an answer with lucifer?
      • hi Pratha-Fish, we can look into this in a bit more detail, but some things that I can see are 1) even if writing is slower for zstd that's not a huge problem, this is a one-off operation and if we can make smaller files then that's an OK tradeoff to have even if it takes a bit longer, 2) we already have proof that zstd is smaller than gzip, given that we've rewritten the files, so we should look into this and see what's going on
      • 3) the "0" time for writing csv+gzip looks really suspicious, and makes me think that something is wrong. what are you measuring here - the read/write time for 1 file, or a set of files?
      • Pratha-Fish: where is the code to generate this graph? I can have a look at it
      • mayhem
        mooooin!
      • BrainzGit
        [musicbrainz-server] 14reosarevok opened pull request #2666 (03beta…MBS-12639): MBS-12639: Also update countries.pot from the DB https://github.com/metabrainz/musicbrainz-serve...
      • reosarevok
        yvanzo: when around, do make sure that seems fine to you since you've looked at pot stuff more than I have :)
      • lucifer
        aerozol: halfway sounds good indeed. re the tests, you'll also need to make the other change monkey mentioned in his comment but other than that the output looks good to push.
      • reosarevok
        outsidecontext: good catch! :D Not sure how we didn't notice for 8 years, but
      • lucifer
        alastairp: yes, it was the cache indeed. many of the builds on existing PRs failed when i tried to push a development image from them after updating master, which invalidated the older cache layers.
      • alastairp
        lucifer: right, got it. we can just rebase those
      • lucifer
        yup
      • alastairp
        I think I'll review CB things today - try and get all of that finished up
      • lucifer
        sounds good
      • Sophist-UK joined the channel
      • Pratha-Fish
        alastairp: Hi!
      • Here's the code to generate the graph: https://github.com/Prathamesh-Ghatole/MLHD/blob...
      • alastairp: re:
      • 1) Sounds good :)
      • 2) Maybe it's because I didn't set the compression level in this particular test
      • 3) The "0" time is because the files are already in CSV+GZIP format, so I didn't benchmark the write times for the same.
      • Lastly, the write time here is the total time taken to write 100 files if I remember correctly
      • CatQuest
        bitmap: *please* 🙇 consider making the new "Credited as" button a toggle instead of a button one always has to press to use relationship credits 🙇 thank you
      • alastairp
        Pratha-Fish: I'm just reading this notebook - for the cells where you test write times, it looks like you include the time needed to read the csv.gz too?
      • this would be more accurate if we read all of the data files into memory first, and then start the timer and do the writes
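A minimal sketch of the approach alastairp describes here: read every file into memory up front, then time only the writes. The directory names and compression settings are illustrative assumptions, not the notebook's actual code.

```python
import time
from pathlib import Path

import pandas as pd

SRC_DIR = Path("MLHD_sample")        # hypothetical directory of sample files
OUT_DIR = Path("/tmp/mlhd_bench")    # hypothetical output location
OUT_DIR.mkdir(parents=True, exist_ok=True)

# 1) read everything into memory first (not timed)
frames = [pd.read_csv(p) for p in sorted(SRC_DIR.glob("*.csv.gz"))]

# 2) time only the write step
start = time.perf_counter()
for i, df in enumerate(frames):
    df.to_parquet(OUT_DIR / f"{i}.parquet", compression="zstd")
print(f"wrote {len(frames)} files in {time.perf_counter() - start:.2f}s")
```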
      • Pratha-Fish
        Ooh that really makes a lot of sense!
      • alastairp
        especially because we should keep in mind that linux does lots of smart things if you read the same file from disk many times one after another
      • it caches files in memory a lot of the time - this means that the first time you run it, it will be slower, and the 2nd time it will be much much faster
      • Pratha-Fish
        wow I didn't take that into account
      • alastairp
        so we're actually including a bias against the first test (zst parquet in your example)
      • Pratha-Fish
        Alright, let me run it once again, but this time I'll load the tables into memory
      • That's a nice new addition to the "lessons learnt" section too :)
      • alastairp
        I see that in our test_rec_track_checker we used level 10 zst compression, whereas in test_file_type_io_testing you're just using the default (3)
      • so that might also be a factor in your file size differences
      • Pratha-Fish
        alastairp: indeed, that seems to be the case
      • But the issue is, even with these same settings the tests seemed to prefer zstd earlier
      • alastairp
        btw, great to see title + axis labels + legend in your graphs! very easy to understand!
      • yes, you're right - that's a bit confusing. but the disk cache issue might really be part of the problem
      • we could try and test independently from disk access - you could read/write into a `bytesio` object
      • this is a thing that "looks like" a file handle, but is actually all in memory
      • it means that we would really only be testing the CPU part of compressing the data, rather than any overhead that reading from/writing to the disk might bring
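A sketch of the BytesIO idea, assuming pandas with the pyarrow engine (which forwards compression_level): writing to an in-memory buffer keeps disk I/O out of the measurement entirely, so only the compression CPU cost is timed.

```python
import io
import time

import pandas as pd

def time_in_memory_parquet(df: pd.DataFrame, level: int = 3) -> tuple[float, int]:
    """Return (seconds, compressed bytes) for serializing df as parquet+zstd in memory."""
    buf = io.BytesIO()                     # file-like object backed by RAM
    start = time.perf_counter()
    df.to_parquet(buf, compression="zstd", compression_level=level)
    return time.perf_counter() - start, buf.getbuffer().nbytes
```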
      • atj
        simplest to just copy the files to /dev/shm
      • alastairp
        ah, cool. thanks atj!
      • Pratha-Fish: so, as atj just pointed out, there is a folder on wolf `/dev/shm`, this is a "memory filesystem"
      • it looks just like a filesystem, but is only stored in memory, not on disk. We could copy our base files there, and then do the normal tests - reading/writing etc, but use this as the target location. this will avoid all issues that reading from disk may bring
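A short sketch of the /dev/shm variant: stage the source files on the tmpfs and write the benchmark output there too. The sample directory name is hypothetical.

```python
import shutil
from pathlib import Path

SRC_DIR = Path("MLHD_sample")            # hypothetical on-disk sample
SHM_DIR = Path("/dev/shm/mlhd_bench")    # tmpfs-backed scratch space
SHM_DIR.mkdir(parents=True, exist_ok=True)

for p in SRC_DIR.glob("*.zst"):
    shutil.copy(p, SHM_DIR / p.name)
# then point the existing read/write benchmark at SHM_DIR for both input and output
```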
      • Pratha-Fish: one more thing I thought of - you said "3) The "0" time is because the files are already in CSV+GZIP format, so I didn't benchmark the write times for the same."
      • but is that correct now? because we deleted all of the gz files. is your sample using gz files (which you may have copied to your home directory?) or is it reading the zst files?
      • atj: you know, I've seen /dev/shm in the output of mount(1) so often, and I wondered what it was, but never questioned it or looked into exactly what it is
      • atj
        alastairp: not sure on the origins of it myself, but I assume shm stands for shared memory
      • elomatreb[m]
        You probably just want a regular tmpfs, rather than /dev/shm
      • atj
        elomatreb[m]: it is a regular tmpfs?
      • alastairp
        elomatreb[m]: are you aware of specific issues in using /dev/shm randomly as a scratch space?
      • elomatreb[m]
        No, but it would be weird
      • Same reason you don't put your temporary files into /boot
      • atj
        well, /boot isn't world writable
      • alastairp
        also, my /boot is a separate smaller partition :) (but that's labouring the point)
      • atj
        for the purposes of some small benchmarks, I think we're OK
      • alastairp
        while I agree that setting up a specific tmpfs for this task would be more correct than using /dev/shm, in the grand scheme of things it'd be even easier to just read the files into a bytesio and use that for the tests
      • so we're in sort of a middleground here
      • Pratha-Fish
        alastairp: atj thanks for the tips :)
      • atj
        would be interesting to include different ZSTD levels in the benchmark to see what the CPU/size tradeoff is
      • Pratha-Fish
        alastairp: Nice catch. We don't use the gzip files anymore. Which also means, even the GZIP tests aren't GZIP tests anymore!
      • I think the best option here might be to load all the files at once into a list, and then test the read/write times independently
      • alastairp
        atj: yeah, I did a bit of that when I was looking at some dumps code that I worked on a few months back
      • anything over about 12 started getting really slow for not much more benefit
      • atj
        diminishing returns
      • alastairp
        it's definitely much faster if you build a compression-specific dictionary, but that's a lot of drama to need to carry around the dictionary for anyone who wants to uncompress the archives
      • atj
        3 vs. 10 or something
      • alastairp
        sure - Pratha-Fish, you already have the default code for compression level 3, and you have the code in the other notebook for compression level 10. You could add that as 2 different columns in the graphs
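One way to produce those two columns, sketched with pyarrow directly and in-memory buffers so only the compression cost is measured; the function and variable names are made up for illustration.

```python
import io
import time

import pyarrow as pa
import pyarrow.parquet as pq

def bench_zstd_levels(tables: list[pa.Table], levels=(3, 10)) -> dict[int, dict]:
    """Time parquet+zstd writes of the given tables at each compression level."""
    results = {}
    for level in levels:
        seconds, size = 0.0, 0
        for table in tables:
            buf = io.BytesIO()
            start = time.perf_counter()
            pq.write_table(table, buf, compression="zstd", compression_level=level)
            seconds += time.perf_counter() - start
            size += buf.getbuffer().nbytes
        results[level] = {"seconds": seconds, "bytes": size}
    return results
```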
      • atj
        gzip has levels too, I think the compression does improve quite a lot at higher levels
      • alastairp
        at a speed tradeoff?
      • it's true - we were always testing gz default level against things like bzip2 and xz default levels, but then testing 10 different zstd levels and saying that it was clearly better
      • atj
        IIRC gzip uses a lot more CPU at higher levels
      • alastairp
        but even in my tests - I was seeing for multi-gb files that even zst -10 was faster _and_ gave smaller results than gzip's default compression
      • atj
        to be fair, how old is gzip? :)
      • alastairp
        and much much faster decompressing too
      • absolutely
      • Pratha-Fish
        alastairp: on it
      • alastairp
        so I think we're making the right decision with zst, there's just a question of what we want our speed/size tradeoff to be
      • especially given that the compression is a once-off operation, we can definitely afford to take longer at the compression stage if it gives significantly better size results
      • Pratha-Fish
        not to mention, pyarrow has significantly increased the write speed as well
      • If we end up making the cleanup process multithreaded, we could also leverage arrow's batch writing functions to further improve the speeds
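A rough sketch of that batch-writing idea using pyarrow's ParquetWriter; the schema here is only an assumption about the MLHD columns, not the project's actual layout.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# assumed, simplified schema for illustration only
schema = pa.schema([
    ("timestamp", pa.int64()),
    ("artist_mbid", pa.string()),
    ("recording_mbid", pa.string()),
])

def write_batches(batches, path: str) -> None:
    """Stream an iterable of pyarrow.RecordBatch objects into one zstd parquet file."""
    with pq.ParquetWriter(path, schema, compression="zstd") as writer:
        for batch in batches:
            writer.write_batch(batch)
```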
      • alastairp
        Pratha-Fish: that being said, I was just looking at my calendar and it reminded me that the submission week starts in only 3 weeks' time - so given our previous experience of the re-writing taking ~1 week, I think we should start testing the creation of our new version quite soon
      • because if we find a mistake after running it for 4 days, we will have to re-run it
      • so let's work on these graphs once more, but then definitely focus on getting the new dataset created
      • Pratha-Fish
        alastairp: absolutely
      • alastairp
        if you want to write up a bunch of posts about things that you learned, then this also has to be finished before your submission deadline, so I'd think about finishing these within the next 2 weeks so that we have time to polish them if needed
      • Pratha-Fish
        This is also one of the reasons why I was hesitant about restructuring the notebooks and older scripts. It could get in the way of the primary objective, given the time constraints
      • alastairp: Yes. Let's hop on to the cleanup script once this benchmark is done :)
      • alastairp
        ok, agreed. so if you need to re-do an older script because you need it as the base of your post then let's do that
      • otherwise, let's leave it
      • Pratha-Fish
        sounds good 👍
      • P.S. Let's also factor in the fact that I have exams on the following schedule:
      • 11-13 Oct (tentative)
      • 16th Oct (fixed)
      • So there goes another 5 days :skull_and_crossbones:
      • alastairp
        definitely. so let's focus on creating the new dataset as a top priority, and how about you tell me in the next few days what topics you would like to write about so that we can see how many of them we think that we can do. Given your constraints I think that we should think about a maximum of two (in addition to a general blog post about what you did for the project)
      • Pratha-Fish
        alastairp: Great. I'll take a look at the journal, and pick 2 topics that I could expand the most upon.
      • The speed benchmarks sound good for starters
      • lucifer
        lol trying to copy 40 GB from SSD to HDD using 4 threads brought my laptop to its knees. git is taking 10 mins to squash 4 commits.
      • alastairp
        agreed
      • lucifer: we ran out of disk space in a machine at the uni, so I moved the docker root from the / ssd to our large spinning disk
      • and things got _so slow_
      • I can't believe we used to consider these things normal
      • maybe it's just because we saw that we had so many resources that we just started writing really inefficient code
      • lucifer
        oh yeah, docker is way too slow for me to run locally so i mostly use wolf. i am planning to buy a new SSD this week to speed up local development.
      • (the current SSD is small and runs out of space often, so i mostly keep stuff on the HDD, hence getting a bigger SSD)
      • atj
        is it NVME?
      • lucifer
        atj: yes
      • mayhem goes back to digesting the large spark query in LB#2037
      • mayhem
        oh, alastairp have you been getting Awair scores of 100 in the past few days?
      • alastairp
        mayhem: yeah, 96-100 yesterday morning
      • lucifer
        mayhem: if it helps, that query 1) explodes the array of artist mbids, 2) counts the number of times an artist has been listened to by a user, 3) filters the fresh releases based on this artist data, and 4) finally sorts it.
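A rough PySpark sketch of those four steps; the DataFrame and column names are assumptions for illustration, not the actual LB#2037 query.

```python
from pyspark.sql import DataFrame, functions as F

def fresh_releases_by_listens(listens: DataFrame, fresh_releases: DataFrame) -> DataFrame:
    # 1) explode the array of artist mbids: one row per (listen, artist)
    exploded = listens.withColumn("artist_mbid", F.explode("artist_mbids"))
    # 2) count how many times each user has listened to each artist
    listen_counts = (
        exploded.groupBy("user_id", "artist_mbid")
                .agg(F.count("*").alias("listen_count"))
    )
    # 3) keep only fresh releases by artists the user has listened to,
    # 4) and sort by that listen count
    return (
        fresh_releases.join(listen_counts, on="artist_mbid")
                      .orderBy("user_id", F.desc("listen_count"))
    )
```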
      • alastairp
        not sure what weights were causing that, I didn't look in detail at the CO2 or PM2.5 levels. low temperature and humidity certainly would have helped
      • BrainzGit
        [critiquebrainz] 14amCap1712 opened pull request #469 (03master…sqlalchemy-warnings): Upgrade SQLAlchemy and enable 2.0 warnings https://github.com/metabrainz/critiquebrainz/pu...
      • lucifer
        alastairp: mayhem: are you both available today for some discussion about incremental updates to mb_metadata_cache?
      • mayhem
        i can be
      • atj: zas: I've got a raspberry pi here in the office running the door opener. The load on the RPi is near zero, but network connectivity is rather spotty/shit.
      • for instance, it takes 1-2 minutes to log in, but once logged in, everything is fine.
      • but then sometimes operations are really slow.
      • any ideas what might cause this -- it looks very much like a network setup issue to me.
      • but, the telegram bot that runs there is always responsive and quick. but generally using the RPi is... meh.
      • zas
        weird, which RPi version is it?
      • mayhem
        3 or 4, not sure. I didn't install it -- the one I installed blew over the summer.
      • it's at 192.168.1.3 if you care to take a look.
      • atj
        DNS?
      • probably not, usually times out after 30s