Optimization of sum in PostgreSQL
Consider this situation: a statistics table with identifier columns and counter columns, and you want to sum the counters over a certain subset of rows. How that subset is selected is not the topic here — many books and articles have been written about indexes and partitioning. We will assume the data is already being selected in the most optimal way and learn how to sum it quickly. This is not the first place to optimize when a query is slow; more likely the last. The ideas below make sense when the execution plan (EXPLAIN) already looks flawless, but you still want to squeeze out a little more.

Let's create a test table and write 10 million rows into it:

```sql
create table s (
  d date,
  browser_id int not null,
  banner_id int not null,
  views bigint,
  clicks bigint,
  primary key(d, browser_id, banner_id)
);

insert into s
select d, browser_id, banner_id, succ + insucc, succ
from (
  select d, browser_id, banner_id, (array[0,0,50,5...
```
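As a baseline for the kind of query being optimized, a straightforward aggregation over a slice of this table might look like the sketch below. The filter values (`d`, `browser_id`) are illustrative assumptions, not from the original text:

```sql
-- Baseline: plain aggregate over a subset of rows.
-- The WHERE values here are only examples.
explain analyze
select sum(views) as views, sum(clicks) as clicks
from s
where d = date '2024-01-01'
  and browser_id = 1;
```

With the primary key on (d, browser_id, banner_id), such a subset can be fetched by an index scan, so once the plan is good, the remaining time is spent mostly in the aggregation itself — which is exactly the part this article tries to speed up.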