alastairp: we also need to define service properly for dataset evaluation. currently it's running as your user
there are a ton of `python evaluate.py` processes. I have no idea what they are doing
fqtw_ has quit
ruaok
Freso: 18th
armalcolite: still here?
armalcolite
yeah
Freso
ruaok: Okay. I plan on getting the mattress today or this week.
ariscop joined the channel
armalcolite has quit
armalcolite joined the channel
ruaok
armalcolite: I'm really sorry that I didn't get around to finishing reviewing the current PRs.
kartikgupta0909
Gentlecat: I just pulled the latest code for AB server and a lot of update files are added. Is there a way I can run them one by one in the order of their addition?
armalcolite
ruaok: np.
kartikgupta0909
so we have some kind of function to update the databases?
*do
Gentlecat
no, you have to run them manually
ruaok
I'm headed out for the weekend -- back late sunday, realistically monday.
armalcolite
ruaok: those 500s you were getting were due to the rebase thing.
ruaok
ah, ok.
those threw me and I lost track then.
armalcolite
ruaok: oh. it would be nice if we can devise a rough plan for the weekend
ruaok
that is my goal.
armalcolite
i pushed a few tests yesterday
ruaok
given how close you are to your overall goal, i would like to work on a performance improvement for the postgres setup.
armalcolite
that is a good idea, and it's very important as well.
ruaok
the theory is that we want to keep the most important data in RAM.
so, then the idea is to split the listen table into listen and listen_json.
put the json into the listen_json table, and keep the time and username in the listen table.
so all queryable bits are in the listen table, all the heavy data is in the listen_json table.
with a foreign key between them.
armalcolite
so just move the json column to listen_json table?
ruaok
the acousticbrainz project does this -- so if you have any questions both Gentlecat and alastairp can help you while I am gone.
raw_data -> listen_json
armalcolite
sure.
ruaok
and the other column will be a listen_id foreign key to listen.id
this should allow the fetching of listens to be faster.
does this make sense?
armalcolite
our idea is to reduce the data being fetched in each attempt so as to reduce load.
ruaok
I would suggest that you make a gist with the proposed SQL changes and get alastairp or Gentlecat to give you feedback on them before you commit them to git
armalcolite
yeah, it's making sense.
ruaok
yes, pretty much that.
reduce memory footprint is a better way of putting it.
armalcolite
sure, i'll gather their help before making final commits
ruaok
the idea is to keep all of the listen table in memory and then fetch data from listen_json as needed.
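The split ruaok describes could look roughly like this. A minimal sketch using sqlite3 purely for illustration (the real deployment is PostgreSQL, and any column names beyond `raw_data` and `listen_id` are guesses based on the discussion above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Narrow, queryable table: small enough to keep entirely in RAM.
conn.execute("""
    CREATE TABLE listen (
        id INTEGER PRIMARY KEY,
        user_name TEXT NOT NULL,
        ts INTEGER NOT NULL          -- listen timestamp
    )
""")

# Heavy payload table: joined in only when the full listen is needed.
conn.execute("""
    CREATE TABLE listen_json (
        listen_id INTEGER NOT NULL REFERENCES listen (id),
        raw_data TEXT NOT NULL       -- the JSON blob
    )
""")

# One listen is split across the two tables.
cur = conn.execute(
    "INSERT INTO listen (user_name, ts) VALUES (?, ?)",
    ("armalcolite", 1466000000),
)
conn.execute(
    "INSERT INTO listen_json (listen_id, raw_data) VALUES (?, ?)",
    (cur.lastrowid, '{"track_name": "example"}'),
)

# Filter on the narrow table; fetch raw_data only for matching rows.
rows = conn.execute("""
    SELECT l.user_name, l.ts, j.raw_data
    FROM listen l
    JOIN listen_json j ON j.listen_id = l.id
    WHERE l.user_name = ?
""", ("armalcolite",)).fetchall()
print(rows)
```

The point of the design is that queries that only filter or sort (by user, by time) never touch the wide JSON rows, so the hot working set stays in memory.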