thank you both by the way, I'm sure you're super busy so I appreciate the help
<riksucks> "alastairp: I have been hearing a..." <- I think the issue with the current AcousticBrainz model is that, to sidestep the fact that we can't store copyrighted music, the algorithm tries to extract the necessary data from the audio itself, so we have to have suitable data that can be trained on
but ofc the algo wasn't useful, sadly
yes, the algo did feature extraction and then used an SVM iirc
truly believe that some sort of deep learning model is needed ngl
what if we used the dataset that the dude on the forums created?
[13:19] <monkey> And it might not sound like it, but I'm in favor of dropping the "my" prefix and find a better way to clarify what's personalized for the user
<-- yes pls, no "my-ing" all the time
mayhem: Sounds good! I'll shoot over a message tomorrow 👍
riksucks: Not really. Lyrics are copyrighted content so we can't store them. There are some relationships to websites where lyrics can be found, though.
I see, makes sense
agatzk has quit
agatzk joined the channel
rbsam176 has quit
tandy1000: yeah, you're correct - the SVM algorithms in the current version of AB work well in some cases, but research has moved on to CNNs and other types of neural networks, and results have shown that the more audio data you have available, the better they work
however, storing enough data to do that is problematic - you're right. the current data that we have cannot be reconstructed back into audio, but the much more detailed data used in CNNs can be (even though the quality isn't perfect)
the other big issue is getting enough training data to build the model, what we're planning to do in listenbrainz now could be a great tool for this, asking users to contribute values for mood
having the data itself is great, but having a bunch of data validated by people is also good for training the models
as usual, with this kind of thing, building the model is easy - it's gathering the data and working out if it's useful which is the hard part
riksucks: I have a few deep learning models for mood from MTG, I have a notebook that runs all combinations, let me upload it for you
I'm not as familiar with the process of training models, but I'll have a look for the code that my coworkers use (we're training on 4 GPUs; this is another problem with deep learning, the hardware requirements are much higher than for other ML models)
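fwiw, the "runs all combinations" part of the notebook is conceptually very simple - it just loops over every model/dataset pair and evaluates each one. a rough sketch of the shape (all names here are made up for illustration; the real notebook uses the actual MTG pretrained models and datasets):

```python
from itertools import product

# Hypothetical model/dataset names, purely to show the structure --
# the real notebook uses MTG's pretrained audio models and mood datasets.
models = ["model_a", "model_b", "model_c"]
datasets = ["mood_set_1", "mood_set_2"]

def evaluate(model, dataset):
    # Placeholder: the real code would load pretrained embeddings for
    # `dataset`, train a classifier head on top of `model`, and return
    # a validation score. Here we just return a label for the run.
    return f"{model} on {dataset}"

# Run every combination and collect the results keyed by (model, dataset).
results = {(m, d): evaluate(m, d) for m, d in product(models, datasets)}

for combo, outcome in sorted(results.items()):
    print(combo, "->", outcome)
```

with 3 models and 2 datasets that's 6 runs; the expensive part is entirely inside `evaluate`, which is why the GPU requirements add up quickly.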
TOPIC: MetaBrainz Community and Development channel | MusicBrainz non-development: #musicbrainz | BookBrainz: #bookbrainz | Channel is logged; see https://musicbrainz.org/doc/IRC for details | Agenda: Reviews
sorry i'll be afk too
I have a repairman at home, might arrive a bit late, I'll let you know when I'm back
<alastairp> "however storing enough data to..." <- is the reconstructable data allowed to be stored?
We have a couple of people not here today, but other than those, the regulars are up: yvanzo, zas, monkey, akshaaatt, mayhem, bitmap, lucifer, alastairp, Freso – anyone else who wishes to give a review, let me know ASAP. :)
Hi! I'm trying to have a chill night today so just mailing this in instead :)
Last week I mostly worked on tests (both adding a few missing ones and adding documentation for a bunch of existing ones).
I also updated a bunch of PRs for the next MBS milestone, and released a new MBS version.
Also, I'm finally done helping CatQuest document instrument adding, and I think it looks quite good now:
It would be great if someone else who isn't used to instrument work could check and see if anything is missing, because we both know how it works and might have overlooked something that isn't obvious to others :)
This week I expect to mostly write more tests and update / improve more PRs.
Fin! Go CatQuest maybe :)
CatQuest won’t be around though, but atj is! atj: Go!
Last week I had a meeting with zas who explained the MB infrastructure and its history as well as the challenges that need to be resolved in the short to medium term. I created a syswiki page based on this information for future reference. I also spent some time reading various documentation on the syswiki to try and familiarise myself more with the infrastructure and how things work etc.
zas and I then had another meeting to discuss Ansible and how it could be used to simplify deployment, configuration and maintenance of MB infrastructure going forward. I created a simple playbook to run against a test VM to demonstrate some of the concepts and functionality. zas’ feedback was positive and we plan to use Ansible to deploy a replacement server in the near future as a real world test.
Last week I installed a Jira add-on to help Freso with pruning spam in tickets.
Also merged a utility script to collect container logs, which can be useful to akshaaatt, alastair, bitmap, lucifer, and mayhem mostly, but also to atj and zas.
Thanks yvanzo !
Completed the deployment of an MBS mirror instance for VolumIO. It helped with debugging a few issues in musicbrainz-docker and sir.
Reviewed many PRs, and updated my PR improving log timestamps in MBS.
Plus documented sysadmin (container logs, SolrCloud alert, MB incident log), and tuned GitHub settings for some repositories.
Fin. Go zas!
(Still up: monkey, akshaaatt, mayhem, bitmap, lucifer, alastairp, Freso – anyone else who wishes to give a review, let me know ASAP. :))
I did upgrades on grafana & discourse (security fixes)
also worked on Picard, PR reviews mainly
Plus usual infrastructure supervision, minor issues handling
As atj said, we had some very constructive meetings; we are thinking about deploying Ansible to help us deploy & maintain servers
I reviewed possible candidates for real-life tests, I'm thinking about cage first
We will order a new server (if ruaok agrees), and set up Ansible to fully deploy initial stuff, up to docker
I think Ansible could help a lot to maintain containers
that's it for me. monkey ?
last week I helped O'Yvanz debug the new remote log collector script
I reviewed a bunch of PRs for ListenBrainz and BookBrainz
Continued helping Shubh work on the server routes and userscript to import entities from other websites
I continued my homework reading up on and thinking about mood classification, and had a good discussion with alastair on that
I spent some time fiddling with Webpack, because that's what JS developers do.
On Friday I helped mayhem hack on a new endpoint to quickly make a playable page in ListenBrainz from recording or album MBIDs.
I made a simple userscript that makes MusicBrainz playable that way. It adds a “Play on ListenBrainz” button in the sidebar on MB pages (release, release group, recording and collection)
I've been waiting for this since I learned about MusicBrainz, so it's really nice to play with it
I made some progress on moving the delete listens stuff to a periodic cron job according to the discussion I had with mayhem.
that led to some scary realisations about postgres transactions.
i also worked on artist/recording similarity and spent much of the week helping debug it. for some reason, using user_id instead of user_name produces worse results. still haven't gotten to the bottom of that issue.
Reviewed some CB PRs for Ansh and worked with alastairp on a CB release.
also, worked on some BU improvements. and finally worked with mayhem to restart the Yotube Quota verification process.
that's it for me, alastairp next?
yo! tube, the hot new app
I reviewed some of lucifer's PRs on LB, which simplify some of our underlying connection handling and we merged them
I fixed a bug in the checking of out-of-date dumps in LB and added some new functionality.
lucifer finished some work on some old CB PRs and we tested them, merged, and deployed. I spent some time with monkey talking about ideas for capturing genre annotations from users in LB
I also started upgrading some dependencies in LB - python 3.9, and flask 2 to fix a bug with caching of static items in LB during development
akshaaatt: you're up?
I continued my work by revamping MB further.
Also, focused on LB as we are refactoring the frontend codebase. We have successfully updated the LB homepage and some other sections!
I have also been designing some cool real components for us to use in our websites! Let's see how that goes.
Other than that our colleges have reopened and I was busy adjusting back to hostel life and visiting the campus daily!
(Only mayhem, bitmap, and myself are still up. Last call for anyone else who wishes to give a review!)
That's about it for me. Go mayhem!
bitmap: Go in the meantime.
Did I miss a ping from Rob saying he wouldn’t be here?
last week I mostly did code review and worked more on schema change branches
we plan to decide on tickets for that tomorrow
I also documented deployment stuff related to StaticBrainz in syswiki
fin! go mayhem if here now
I guess I’ll go and if mayhem’s here when I’m done, he can go, otherwise we’ll just skip.
Dealt with some reported editors, looked into client-side image optimisation for the forum after the upgrade, got rid of some spam on Jira, dealt with some flagged forum content, and broke out some discussions into separate topics, etc.
Got kellnerd a MetaBrainz IRC cloak.
I think that’s about it, so, fin!
And no sign-of-life from Rob, so this wraps up the reviews: thank you everyone who contributed!
And no more items on the agenda, so this also wraps up the meeting.
Thank you all for your time! Stay safe out there! :)
they show how as we remove more and more data, it gets more difficult to reconstruct the audio. however, more data is always useful for training, so it's a tradeoff between what can be reconstructed and the results from your model
making things more difficult, techniques for _reconstruction_ are always improving too. someone I know did some studies where he took some work that was thought not to be reconstructable, applied some new deep learning techniques to it, and got pretty "good" results
in specific response to your question - "is reconstructable data able to be stored?", that's a great question, with no absolute clear answer. however in most cases it's probably going to be "no"
5-10 years ago, the data that we have in AB was used for much of the state-of-the-art research, and while it didn't give as good answers as the more recent deep learning models, it worked well in many cases. This data is impossible to reconstruct, so this wasn't as much of a problem
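a toy illustration of the tradeoff (numpy only, nothing to do with the actual AB features): if you keep the full complex spectrum of a signal you can invert it perfectly, but if you keep only a reduced summary - here, the magnitude with the phase thrown away - a naive reconstruction is badly wrong. the more detail a feature set keeps, the closer you get to the first case:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)  # stand-in for an audio frame

# Full complex spectrum of the real signal.
spectrum = np.fft.rfft(signal)

# Keeping everything: irfft(rfft(x)) recovers the signal exactly
# (up to floating-point error).
perfect = np.fft.irfft(spectrum, n=1024)

# Keeping only the magnitude (phase discarded): inverting with an
# implicit zero phase gives a very different signal back.
magnitude_only = np.fft.irfft(np.abs(spectrum), n=1024)

err_full = np.max(np.abs(perfect - signal))
err_mag = np.max(np.abs(magnitude_only - signal))
print("full spectrum max error:", err_full)
print("magnitude-only max error:", err_mag)
```

(real spectrogram-inversion work is much cleverer than zero-phase - phase estimation and learned vocoders recover a lot - which is exactly the "reconstruction keeps improving" problem above)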
yvanzo: thanks for the ping on the log collection tool - I didn't follow the PR last week, but I'll take a look and see if I can use it
Huh, i'm guessing my status update is still stuck in my outbound mail queue. Sorry for that, everyone!