ok so it's like: someone uses the same audio file to generate a fingerprint, and that audio file has no metadata, but our database has metadata linked with that fingerprint, then we can return the metadata to the user?
_lucifer
yes
white_shadow
ok that's a nice feature!
it is forcing me to use NDK version 21.2.6472646 while i have 21.3.6528147
_lucifer
you can set the ndkVersion in the build.gradle file
`ndkVersion "21.3.6528147"`
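(For reference, a minimal sketch of where that line goes; the surrounding `android` block is the standard module-level build.gradle structure, assumed here rather than quoted from the chat:)

```groovy
// app/build.gradle (module level)
android {
    // Pin the NDK so the plugin stops defaulting to 21.2.6472646
    ndkVersion "21.3.6528147"
}
```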
white_shadow
worked thanks
why can't we give login/signup in the app itself?
It's a basic feature in any application
shivam-kapila
MusicBrainz has its own OAuth
_lucifer
we are using OAuth instead of Http Digest
we could try using a webview inside the app instead of opening a browser but i don't think that would make much difference. also not all people like chrome
Honestly I'm not sure about our sorting. We'll definitely want to replace it so that users can sort with the column and order of their choosing, but it does require a sizeable refactor.
prabal
How about showing the success message and changing the page number to the last page
so the user can see the added entity
Mr_Monkey
I don't think that's a huge issue if there's a clear success message
and as a user, I prefer not to be taken somewhere I didn't request to go
Let's shelve that one as "issues that will be solved with proper sortable tables"
prabal
yeah makes sense
Mr_Monkey
Showing a success message however is good either way, and kind of solves our current predicament
_lucifer
white_shadow: if you want a nicer ux with login, the only option i see is http digest. it could be added as a secondary option for users to enable if they want. but then it is essentially a UX vs. security tradeoff. we would have to ensure that the password is safeguarded, and that is difficult. if you can ensure that, it can be added, but only as a secondary feature
prabal
Umm showing a message is tricky
because we are refreshing the page
_lucifer
white_shadow: also there has not been any request by the users in this regard so i do think it is not only unsafe but unnecessary
Mr_Monkey
What I suggest is: instead of refreshing the page, call a method passed in the modal props (something like `successfullyAddedItems`) that will do three things in the collection component: 1. close the modal 2. show a success message and 3. call the pagination endpoint to refresh the page
It complicates the component a bit, but I think the seamless user experience is worth it
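(The three steps above can be sketched as a plain function; all names here are illustrative assumptions, not the actual component API:)

```typescript
// Actions the collection component would wire into the modal's props.
type CollectionActions = {
  closeModal: () => void;
  showSuccessMessage: (msg: string) => void;
  refreshCurrentPage: () => Promise<void>; // re-calls the pagination endpoint
};

// Builds the success handler the modal receives and calls after adding items.
function makeSuccessHandler(actions: CollectionActions) {
  return async (addedCount: number): Promise<void> => {
    actions.closeModal();                                      // 1. close the modal
    actions.showSuccessMessage(`Added ${addedCount} item(s)`); // 2. show success message
    await actions.refreshCurrentPage();                        // 3. refresh the current page
  };
}
```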
white_shadow
how many active users do we have in our app? _lucifer
_lucifer
1000-5000
but i think it will rise once we get the tagger and listening working
white_shadow
Customers don't like to get redirected to any other app, and i think the website is not compatible with the mobile view itself
prabal
Yeah alright. and let's keep the entities in ascending order right now, with the most recently added last
white_shadow
i had to scroll and zoom in, zoom out
yvanzo
reosarevok: thanks, made it public!
white_shadow
but yes, that all depends on how concerned we are about our customer count
_lucifer
not customers but users
white_shadow
yeah
_lucifer
if the users request this feature in decent numbers we can reconsider
on how to approach the issue
but for now there haven't been any such requests
white_shadow
its fine man!
nelgin
2020-08-11 06:54:04,296: Failed to import recording with id in bounds (233327, 249149)
2020-08-11 06:54:04,477: (psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
"recordings" is the largest index, you can add "--entity-type recording" to rebuild only that index
most probably some recently added data makes the indexer fail, I will check that
alastairp
double-check the log output of postgres too - I had a database import failure a few weeks ago and it was because there wasn't enough memory in docker for mac allocated, postgres was corrupting data and restarting
thge "server closed the connection unexpectedly" reminds me of that error
yvanzo
nelgin: for the above, run docker-compose logs -t db
nelgin
db_1 | 2020-08-10T14:42:03.341858653Z 2020-08-10 14:42:03.336 UTC [24] LOG: database system was shut down at 2020-08-10 14:41:06 UTC
db_1 | 2020-08-10T14:42:03.341871974Z 2020-08-10 14:42:03.341 UTC [1] LOG: database system is ready to accept connections
db_1 | 2020-08-10T14:53:10.989199804Z 2020-08-10 14:53:10.989 UTC [42] FATAL: database "musicbrainz_db" does not exist
db_1 | 2020-08-10T18:46:42.630713171Z 2020-08-10 18:46:31.822 UTC [1] LOG: server process (PID 5048) was terminated by signal 9: Killed
I need to run to a doctor's appointment. I'll try allocating more memory. How do I fix the DB in the meantime?
bbiab
yvanzo
This one is a false positive: FATAL: database "musicbrainz_db" does not exist
(actually an innocuous error triggered by an init script)
nelgin
OK. I'm back. I'm going to add 2gb to the VM and see if that'll help it through.
alastairp
nelgin: welcome back
if you run `dmesg`, does it show any text about Out of memory? (or OOM killer?)
how much memory does the VM have?
nelgin
I had 8gb, now increased to 10. I'm just trying to start up docker, but it looks like it wants to install and compile a whole bunch of stuff for some reason, so I'll let it finish.
alastairp
and you said that you made some changes to memory settings as well? it sounds like this might be the cause of the crash
nelgin
I increased pgsql from 2gb to 4gb per the suggestion
alastairp
the other suggestion, which might not be listed in the documentation, is that the postgres memory value should be no more than 25% of your total ram
so if you set it to 4gb, ideally you should have 16gb memory available
nelgin
Something is terribly wrong here. I'm just going to reinstall the VM. Also, any way to have it use a US rather than an EU mirror for downloading the files?
alastairp
under the documentation for shared_buffers: "If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system."
nelgin
I have 10 GB; that's all I can give it.
alastairp
It sounds like a good idea for us to add a note there about this, or a link to the postgres documentation
in that case, you might be able to get away with leaving it at 2gb, although this will make things quite a bit slower
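(As a concrete sketch of that 1/4 rule of thumb; the values below are computed for the 10 GB VM discussed here, not taken from the docs:)

```
# postgresql.conf -- shared_buffers at roughly 1/4 of available RAM
#   16 GB available -> shared_buffers = 4GB    (safe for the 4gb setting used above)
#   10 GB available -> shared_buffers = 2560MB (~2.5 GB)
shared_buffers = 2560MB
```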
yvanzo
alastairp: I have read that before, ideally yes but otherwise it isn't supposed to affect stability iirc
alastairp
yvanzo: as I said previously, my experience with shared_buffers set to 4gb and only 8gb in docker-for-mac was that postgres would reliably corrupt when building db indexes during data import