#metabrainz


      • _lucifer
        (all except the file created timestamp and so on)
      • white_shadow
        ok so its like someone uses the same audio file to generate fingerprint and that audio file has no meta data but our database has meta data linked with that fingerprint then we can return the meta-data to the user?
      • _lucifer
        yes
      • white_shadow
        ok that's a nice feature!
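The lookup white_shadow describes can be sketched as below. This is a minimal illustration only; the fingerprint value, table, and field names are invented here and are not the actual MetaBrainz schema or API.

```python
from typing import Optional

# Pretend server-side store: fingerprint -> metadata.
# Both the fingerprint string and the fields are hypothetical.
FINGERPRINT_DB = {
    "AQADtEmUkZ...": {"artist": "Example Artist", "title": "Example Track"},
}

def lookup_metadata(fingerprint: str) -> Optional[dict]:
    """Return stored metadata for an audio fingerprint, if known.

    Even if the submitted audio file carries no tags, a match on the
    fingerprint lets the server return the metadata linked to it.
    """
    return FINGERPRINT_DB.get(fingerprint)
```

So a tagless file whose fingerprint is already known still resolves to full metadata, while an unknown fingerprint simply returns nothing.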
      • it is forcing me to use ndk version 21.2.6472646 while i have 21.3.6528147
      • _lucifer
        you can set the ndkVersion in the build.gradle file
      • `ndkVersion "21.3.6528147"`
      • white_shadow
        worked thanks
      • why can't we give login/signup in the app itself?
      • It's a basic feature in any application
      • shivam-kapila
        MusicBrainz has its own OAuth
      • _lucifer
        we are using OAuth instead of Http Digest
      • we could try using a webview inside the app instead of opening a browser but i don't think that would make much difference. also not all people like chrome
      • white_shadow
        never heard of it before
      • _lucifer: yeah we can do that
      • can we use this
      • ?
      • _lucifer
        white_shadow: already on my list to integrate the app auth with account manager :)
      • but you can work on it if you want. i do not have a timeline to implement this
      • white_shadow
        so we can do login/signup from app itself by using this?
      • _lucifer
        no it is not that simple
      • the user will still have to login using the current workflow
      • it is just that the auth token will be managed by the account manager api instead of our app directly
      • white_shadow
        yeah it seems difficult, but i think the API can be written in Auth0, so why can't we have the API for the application?
      • _lucifer
        because we already have our own authentication apis
      • which are used by numerous other projects and applications
      • white_shadow
        ok but they are not compatible for android use?
      • _lucifer
        but we are using them so they are compatible
      • the workflow is different from other applications, that's the only difference
      • BrainzGit
        [bookbrainz-site] MonkeyDo merged pull request #489 (UserCollection…fix-checkEntityTypeBeforeAdding): fix: throw error when trying to add collection, area, editor to a collection https://github.com/bookbrainz/bookbrainz-site/p...
      • prabal
        Mr_Monkey: I saw your review. I wanted to talk about whether the entities in collection should be in ascending order or descending order
      • white_shadow
        ok
      • prabal
        I changed it to DESC because if it's ASC then user might not be able to see the added entity
      • because of pagination
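The pagination problem prabal raises comes down to simple arithmetic: with ascending order a newly added entity is appended at the end, which lands it on the last page rather than the one the user is looking at. The page size and counts below are made up for illustration.

```python
import math

def page_of_newest_item(total_items: int, page_size: int, ascending: bool) -> int:
    """Page number (1-based) on which the most recently added item appears."""
    if ascending:
        # Newest item sorts last -> it sits on the final page.
        return math.ceil(total_items / page_size)
    # Descending: newest item sorts first -> always page 1.
    return 1

# e.g. 45 entities at 20 per page: with ASC the new entity is on page 3,
# which the user won't see unless they navigate there; with DESC it's on page 1.
print(page_of_newest_item(45, 20, ascending=True))   # -> 3
print(page_of_newest_item(45, 20, ascending=False))  # -> 1
```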
      • Mr_Monkey
        Yeah, I figured that was the reason.
      • prabal
        should i change it back to ASC?
      • Mr_Monkey
        I'd be more comfortable with showing a success message and refreshing the collection (if it was a success)
      • travis-ci joined the channel
      • travis-ci
        Project bookbrainz-site build #3314: passed in 3 min 29 sec: https://travis-ci.org/bookbrainz/bookbrainz-sit...
      • travis-ci has left the channel
      • prabal
        Hmm okayy
      • Mr_Monkey
        Honestly I'm not sure about our sorting. We'll definitely want to replace it so that users can sort with the column and order of their choosing, but it does require a sizeable refactor.
      • prabal
        How about showing the success message and changing the page number to the last
      • so user can see the added entity
      • Mr_Monkey
        I don't think that's a huge issue if there's a clear success message
      • and as a user, I prefer not to be taken somewhere I didn't request to go
      • Let's shelve that one as "issues that will be solved with proper sortable tables"
      • prabal
        yeah makes sense
      • Mr_Monkey
        Showing a success message however is good either way, and kind of solves our current predicament
      • _lucifer
        white_shadow: if you want a nicer ux with login, the only option i see is http digest. it could be added as a secondary option for users to enable if they want. but then it is essentially a UX v/s Security tradeoff. we would have to ensure that the password is safeguarded and that is difficult. if you can ensure that, it can be added but only as a secondary feature
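For reference, client-side HTTP Digest of the kind _lucifer mentions can be built with the standard library alone. The URL and credentials below are placeholders, and this is a sketch of the tradeoff being discussed, not a supported MetaBrainz workflow; the key point is that the app must hold the user's actual password to answer each Digest challenge, which is exactly the safeguarding problem described above.

```python
import urllib.request

def build_digest_opener(url_prefix: str, username: str, password: str):
    """Opener that answers HTTP Digest challenges for URLs under url_prefix.

    Note: the plain password has to live in the password manager for the
    lifetime of the client -- the UX vs. security tradeoff in question.
    """
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url_prefix, username, password)
    return urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(mgr)
    )

# Hypothetical usage (no request is sent here):
opener = build_digest_opener("https://example.org/", "user", "secret")
```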
      • prabal
        Umm showing a message is tricky
      • because we are refreshing the page
      • _lucifer
        white_shadow: also there has not been any request by the users in this regard so i do think it is not only unsafe but also unnecessary
      • sumedh has quit
      • Mr_Monkey
        What I suggest is: instead of refreshing the page, call a method passed in the modal props (something like `succesfullyAddedItems`) that will do three things in the collection component: 1. close the modal 2. show success message and 3. call the pagination endpoint to refresh the page
      • It complicates the component a bit, but I think the seamless user experience is worth it
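The three-step callback Mr_Monkey proposes is React code in the actual project; the Python sketch below only illustrates the pattern, with invented names (`AddToCollectionModal`, `CollectionPage`) standing in for the real components.

```python
class AddToCollectionModal:
    """Sketch of the modal: instead of reloading the whole page, it calls
    a callback passed in by the parent collection component."""

    def __init__(self, on_success):
        self.on_success = on_success  # supplied via the modal props

    def submit(self, items):
        added = items  # pretend the add-to-collection API call succeeded
        self.on_success(added)


class CollectionPage:
    def __init__(self):
        self.modal_open = True
        self.message = None
        self.refreshed = False

    def successfully_added_items(self, items):
        # 1. close the modal
        self.modal_open = False
        # 2. show a success message
        self.message = f"Added {len(items)} item(s) to the collection"
        # 3. call the pagination endpoint to re-fetch the current page
        self.refreshed = True
```

The parent wires itself in with `AddToCollectionModal(page.successfully_added_items)`, so the modal never needs to know how the page refreshes itself.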
      • white_shadow
        how many active users do we have in our app? _lucifer
      • _lucifer
        1000-5000
      • but i think it will rise once we get the tagger and listening working
      • white_shadow
        Customers don't like to get redirected to any other app, and i think the website is not compatible with the mobile view itself
      • prabal
        Yeah alright. And let's keep the entities in ascending order for now, with the most recently added at the end
      • white_shadow
        i had to scroll and zoom in, zoom out
      • v6lur has quit
      • yvanzo
        reosarevok: thanks, made it public!
      • white_shadow
        but yes that all depends on how concerned we are regarding our customers count
      • _lucifer
        not customers but users
      • white_shadow
        yeah
      • _lucifer
        if the users request this feature in decent numbers we can reconsider
      • on how to approach the issue
      • but for now there haven't been such requests
      • supersandro2000 has quit
      • supersandro2000 joined the channel
      • white_shadow
        its fine man!
      • samj1912 has quit
      • white_shadow has quit
      • white_shadow joined the channel
      • supersandro2000 has quit
      • supersandro2000 joined the channel
      • nelgin
        2020-08-11 06:54:04,296: Failed to import recording with id in bounds (233327, 249149)
      • 2020-08-11 06:54:04,477: (psycopg2.OperationalError) server closed the connection unexpectedly
      • This probably means the server terminated abnormally
      • before or while processing the request.
      • It always seems to be happening on "recordings"
      • yvanzo: This is the entire output https://pastebin.com/wnm1ktrE
      • yvanzo
        "recordings" is the largest index, you can add "--entity-type recording" to rebuild only that index
      • most probably some recently added data makes the indexer fail, I will check that
      • alastairp
        double-check the log output of postgres too - I had a database import failure a few weeks ago and it was because there wasn't enough memory in docker for mac allocated, postgres was corrupting data and restarting
      • the "server closed the connection unexpectedly" reminds me of that error
      • yvanzo
        nelgin: for the above, run docker-compose logs -t db
      • white_shadow has quit
      • nelgin
        db_1 | 2020-08-10T14:42:03.341858653Z 2020-08-10 14:42:03.336 UTC [24] LOG: database system was shut down at 2020-08-10 14:41:06 UTC
      • db_1 | 2020-08-10T14:42:03.341871974Z 2020-08-10 14:42:03.341 UTC [1] LOG: database system is ready to accept connections
      • db_1 | 2020-08-10T14:53:10.989199804Z 2020-08-10 14:53:10.989 UTC [42] FATAL: database "musicbrainz_db" does not exist
      • db_1 | 2020-08-10T18:46:42.630713171Z 2020-08-10 18:46:31.822 UTC [1] LOG: server process (PID 5048) was terminated by signal 9: Killed
      • I need to run to a doctors appointment. I'll try allocating more memory. How do I fix the DB in the meantime?
      • bbiab
      • yvanzo
        This one is a false positive: FATAL: database "musicbrainz_db" does not exist
      • (actually an innocuous error triggered by an init script)
      • BrainzGit
        [musicbrainz-server] reosarevok opened pull request #1646 (master…MBS-11031): MBS-11031: Convert historic Edit Track edit to React https://github.com/metabrainz/musicbrainz-serve...
      • BrainzBot
        MBS-11031: Convert historic Edit Track edit to React https://tickets.metabrainz.org/browse/MBS-11031
      • BrainzGit
        [acousticbrainz-server] alastair merged pull request #257 (master…preferences): AB-98: Add a feature to select SVM parameters https://github.com/metabrainz/acousticbrainz-se...
      • BrainzBot
        AB-98: Generate custom project file during model evaluation stage https://tickets.metabrainz.org/browse/AB-98
      • nelgin
        OK. I'm back. I'm going to add 2gb to the VM and see if that'll help it through.
      • alastairp
        nelgin: welcome back
      • if you run `dmesg`, does it show any text about Out of memory? (or OOM killer?)
      • how much memory does the VM have?
      • nelgin
        I had 8gb, now increased to 10. I'm just trying to start up the docker but it looks like it is wanting to install and compile a whole bunch of stuff for some reason, so I'll let it finish.
      • alastairp
        and you said that you made some changes to memory settings as well? it sounds like this might be the cause of the crash
      • nelgin
        I increased pgsql from 2gb to 4gb per the suggestion
      • alastairp
        the other suggestion, which might not be listed in the documentation, is that the postgres memory value should be no more than 25% of your total ram
      • so if you set it to 4gb, ideally you should have 16gb memory available
      • nelgin
        OK, that I did not see.
      • There is nothing here about any such 25% limit.
      • alastairp
        it's a best-practice recommendation of postgres - https://wiki.postgresql.org/wiki/Tuning_Your_Po...
      • nelgin
        Something is terribly wrong here. I'm just going to reinstall the vm. Also, any way to have it use a us rather than eu mirror for downloading the files?
      • alastairp
        under the documentation for shared_buffers: "If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system."
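The 25% rule of thumb alastairp quotes is plain arithmetic, shown here as a quick check (this is just the heuristic from the Postgres tuning wiki, not a Postgres API):

```python
def recommended_shared_buffers_gb(total_ram_gb: float) -> float:
    """Postgres tuning rule of thumb: shared_buffers ~= 1/4 of system RAM."""
    return total_ram_gb / 4

# nelgin's VM has 10 GB total, so ~2.5 GB is the safe ceiling for
# shared_buffers -- the configured 4 GB would call for a 16 GB machine.
print(recommended_shared_buffers_gb(10))  # -> 2.5
print(recommended_shared_buffers_gb(16))  # -> 4.0
```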
      • nelgin
        I have 10 gb that's all I can give it.
      • alastairp
        It sounds like a good idea that we should add a note there about this, or a link to the postgres documentation
      • in that case, you might be able to get away with leaving it at 2gb, although this will make things quite a bit slower
      • yvanzo
        alastairp: I have read that before, ideally yes but otherwise it isn't supposed to affect stability iirc
      • alastairp
        yvanzo: as I said previously, my experience with shared_buffers set to 4gb and only 8gb in docker-for-mac was that postgres would reliably corrupt when building db indexes during data import
      • increasing docker ram made the problem go away
      • yvanzo
        alastairp: ok thanks, I never tested with this platform
      • nelgin
        OK, reinstalling ubuntu. I'll take a snapshot this time once it has done the updates :)
      • Building the docker image.
      • reosarevok
        yvanzo, bitmap: can you find anything that uses root/edit/details/add_link.tt ?
      • nelgin
        I'm at the "Download latest full data dumps and create the database with..." bit. How can I get this to use a US rather than an EU mirror?
      • Nobody?
      • bitmap
        reosarevok: looks unused to me
      • alastairp
        nelgin: you should be able to just add the URL of the US mirror to the createdb.sh command
      • ishaanshah
        iliekcomputers: Hi
      • iliekcomputers
        heyo
      • how goes it?
      • ishaanshah
        going well
      • I started writing the code for the backend today
      • iliekcomputers
        awesome!
      • ishaanshah
        I spent the evening thinking about a new way to store the entity stats
      • iliekcomputers
        i don't think we need to plan much for this week, i think the first sitewide graph will take this week entirely.
      • ishaanshah
        Yeah about that
      • I was wondering, instead of storing the entity stats in a json what if we store it in a new table in a relational way
      • it will solve a lot of problems for us as well as allow us to implement new features easily
      • what I was thinking was we create a table with the following schema for top artists suppose
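ishaanshah's message ends before the schema itself appears in the log. Purely as a hypothetical illustration of the relational idea (table and column names are invented here, not from the project), a per-user top-artists table might look like:

```python
import sqlite3

# Hypothetical schema for per-user "top artists" stats, illustrating the
# relational alternative to storing one JSON blob per user.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_top_artist (
        user_id      INTEGER NOT NULL,
        artist_name  TEXT    NOT NULL,
        artist_mbid  TEXT,             -- may be NULL if unmapped
        listen_count INTEGER NOT NULL,
        time_range   TEXT    NOT NULL  -- e.g. 'week', 'month', 'all_time'
    );
""")
conn.execute(
    "INSERT INTO user_top_artist VALUES (?, ?, ?, ?, ?)",
    (1, "Example Artist", None, 42, "week"),
)
# New features then fall out of plain SQL, e.g. a user's weekly top artists:
rows = conn.execute(
    "SELECT artist_name, listen_count FROM user_top_artist "
    "WHERE user_id = ? AND time_range = ? ORDER BY listen_count DESC",
    (1, "week"),
).fetchall()
```

As iliekcomputers notes below, the JSON column was chosen precisely because the calculated data's shape was not yet known, so this sketch is the eventual direction rather than the current design.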
      • iliekcomputers
        i think that is a good idea, and eventually we will probably do that. my instinct is that it isn't in the scope of your current project.
      • we initially went with json because we didn't know what data we would calculate and in what format.