alastairp: we also need to define service properly for dataset evaluation. currently it's running as your user
2016-07-06 18:52
Gentlecat
there are a ton of `python evaluate.py` processes. I have no idea what they are doing
2016-07-06 18:04
fqtw_ has quit
2016-07-06 18:04
ruaok
Freso: 18th
2016-07-06 18:16
ruaok
armalcolite: still here?
2016-07-06 18:37
armalcolite
yeah
2016-07-06 18:52
Freso
ruaok: Okay. I plan on getting the mattress today or this week.
2016-07-06 18:09
ariscop joined the channel
2016-07-06 18:17
armalcolite has quit
2016-07-06 18:35
armalcolite joined the channel
2016-07-06 18:22
ruaok
armalcolite: I'm really sorry that I didn't get around to finishing reviewing the current PRs.
2016-07-06 18:48
kartikgupta0909
Gentlecat: I just pulled the latest code for AB server and a lot of update files are added. Is there a way I can run them one by one in the order of their addition?
2016-07-06 18:54
armalcolite
ruaok: np.
2016-07-06 18:59
kartikgupta0909
so we have some kind of function to update the databases?
2016-07-06 18:01
kartikgupta0909
*do
2016-07-06 18:18
Gentlecat
no, you have to run them manually
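[editor's note: Gentlecat says the update files have to be run manually, in the order of their addition. A hypothetical helper like the one below could at least enumerate them deterministically; the directory path, the sortable-filename assumption, and the psql invocation are illustrative guesses, not the AB server's actual layout.]

```python
# Hypothetical sketch: apply pending SQL update files in order of addition.
# Assumes files carry a sortable prefix (e.g. 01-..., 02-...) so that
# lexicographic order matches the order they were added. The directory
# and the psql command line are assumptions for illustration only.
import glob
import subprocess

def pending_updates(directory):
    """Return SQL update files sorted so earlier additions come first."""
    return sorted(glob.glob(directory + "/*.sql"))

def apply_updates(directory, dry_run=True):
    """Run each update file in turn; dry_run only prints what would run."""
    for path in pending_updates(directory):
        if dry_run:
            print("would apply", path)
        else:
            subprocess.run(["psql", "-d", "acousticbrainz", "-f", path],
                           check=True)
```

With `dry_run=True` this just lists the files in order, which is a cheap way to double-check the sequence before actually running anything against the database.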
2016-07-06 18:25
ruaok
I'm headed out for the weekend -- back late sunday, realistically monday.
2016-07-06 18:31
armalcolite
ruaok: those 500s you were getting were due to the rebase thing.
2016-07-06 18:47
ruaok
ah, ok.
2016-07-06 18:55
ruaok
those threw me and I lost track then.
2016-07-06 18:03
armalcolite
ruaok: oh. it would be nice if we could devise a rough plan for the weekend
2016-07-06 18:12
ruaok
that is my goal.
2016-07-06 18:35
armalcolite
i pushed a few tests yesterday
2016-07-06 18:43
ruaok
given how close you are to your overall goal, i would like to work on a performance improvement for the postgres setup.
2016-07-06 18:12
armalcolite
that is a good idea; it's very important as well.
2016-07-06 18:52
ruaok
the theory is that we want to keep the most important data in RAM.
2016-07-06 18:16
ruaok
so, then the idea is to split the listen table in listen and listen_json.
2016-07-06 18:34
ruaok
put the json into the listen_json table and the time and username into the listen table.
2016-07-06 18:56
ruaok
so all queryable bits are in the listens table, all the heavy data is in the listens_json table.
2016-07-06 18:12
ruaok
with a foreign key between them.
2016-07-06 18:25
armalcolite
so just move the json column to listen_json table?
2016-07-06 18:34
ruaok
the acousticbrainz project does this -- so if you have any questions both Gentlecat and alastairp can help you while I am gone.
2016-07-06 18:27
ruaok
raw_data -> listen_json
2016-07-06 18:58
armalcolite
sure.
2016-07-06 18:02
ruaok
and the other column will be a listen_id foreign key to listen.id
2016-07-06 18:28
ruaok
this should allow the fetching of listens to be faster.
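[editor's note: the split ruaok describes can be sketched with a minimal schema. The table and column names (`listen`, `listen_json`, `listen_id`, `raw_data`) follow the chat, but the column types are assumptions, and SQLite stands in for Postgres here.]

```python
# Minimal sketch of the proposed listen / listen_json split. SQLite is a
# stand-in for Postgres; types and sample data are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Small, queryable columns stay in listen so the table fits in RAM.
    CREATE TABLE listen (
        id        INTEGER PRIMARY KEY,
        user_name TEXT NOT NULL,
        ts        INTEGER NOT NULL          -- listen timestamp
    );
    -- Heavy JSON payloads move to listen_json, fetched only when needed.
    CREATE TABLE listen_json (
        listen_id INTEGER NOT NULL REFERENCES listen (id),
        raw_data  TEXT NOT NULL             -- the full listen JSON
    );
""")

conn.execute("INSERT INTO listen (id, user_name, ts) VALUES (1, 'rob', 1467828000)")
conn.execute("INSERT INTO listen_json (listen_id, raw_data) "
             "VALUES (1, '{\"track\": \"...\"}')")

# Query the light listen table first, joining to listen_json only for
# the rows whose heavy payload is actually needed.
row = conn.execute("""
    SELECT l.user_name, j.raw_data
    FROM listen l JOIN listen_json j ON j.listen_id = l.id
    WHERE l.user_name = 'rob'
""").fetchone()
print(row)
```

The point of the design is that filters and sorts touch only the narrow `listen` table, which is what ruaok means by keeping the most important data in RAM, while the wide JSON rows are pulled in a second step or via a join.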
2016-07-06 18:38
ruaok
does this make sense?
2016-07-06 18:03
armalcolite
our idea is to reduce the data being fetched in each attempt so as to reduce load.
2016-07-06 18:10
ruaok
I would suggest that you make a gist with the proposed SQL changes and get alastairp or Gentlecat to give you feedback on them before you commit them to git
2016-07-06 18:14
armalcolite
yeah, it's making sense.
2016-07-06 18:24
ruaok
yes, pretty much that.
2016-07-06 18:39
ruaok
reducing the memory footprint is a better way of putting it.
2016-07-06 18:43
armalcolite
sure, i'll gather their help before making final commits
2016-07-06 18:57
ruaok
the idea is to keep all of the listen table in memory and then fetch data from listen_json as needed.