I need to pull several rows from a table and process them in two ways:
- aggregated on a key
- row-by-row, sorted by the same key
The table looks roughly like this:
table (
    key,
    string_data,
    numeric_data
)
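For the sake of a concrete example, assume it looks something like this (only the column names come from the real table; the table name my_table, the types, and the index are just illustrative, and key is backtick-quoted because it is a reserved word in MySQL):

// Illustrative schema only -- column names match the pseudocode above,
// everything else is a placeholder.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->exec(
    "CREATE TABLE my_table (
        `key`        VARCHAR(64)    NOT NULL,
        string_data  VARCHAR(255)   NOT NULL,
        numeric_data DECIMAL(10, 2) NOT NULL,
        INDEX (`key`)
    )"
);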
So I'm looking at two approaches to the function I'm writing.
The first would pull the aggregate data with one query, and then query again inside a loop for each set of row-by-row data (the following is PHP-like pseudocode):
$rows = query(
    "SELECT key, SUM(numeric_data)
     FROM table
     GROUP BY key"
);

foreach ($rows as $row) {
    <process aggregate data in $row>
    $key = $row['key'];
    $row_by_row_data = handle_individual_rows($key);
}

function handle_individual_rows($key)
{
    $rows = query(
        "SELECT string_data
         FROM table
         WHERE key = ?",
        $key
    );
    <process $rows one row at a time>
    return $processed_data;
}
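To keep things concrete, here is a minimal sketch of that first approach with the placeholders filled in. It assumes PDO, the illustrative my_table schema above, a total alias on the SUM, the PDO handle passed into the helper, and trivial processing (printing and uppercasing); none of that is the real logic, it's just there to make the structure runnable:

// Sketch of approach one: aggregate query first, then a per-key query
// inside the loop. Processing steps are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$totals = $pdo->query(
    "SELECT `key`, SUM(numeric_data) AS total
     FROM my_table
     GROUP BY `key`"
);

foreach ($totals as $row) {
    // <process aggregate data in $row>
    echo "Total for {$row['key']}: {$row['total']}\n";

    // Row-by-row data for the same key.
    $details = handle_individual_rows($pdo, $row['key']);
    echo "  " . implode(', ', $details) . "\n";
}

function handle_individual_rows(PDO $pdo, $key)
{
    $stmt = $pdo->prepare(
        "SELECT string_data FROM my_table WHERE `key` = ?"
    );
    $stmt->execute([$key]);

    // <process $rows one row at a time> -- here, just uppercase each string.
    $processed_data = [];
    foreach ($stmt as $row) {
        $processed_data[] = strtoupper($row['string_data']);
    }
    return $processed_data;
}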
Or, I could do one big query and let the code do all the work:
$rows = query(
    "SELECT key, string_data, numeric_data
     FROM table
     ORDER BY key"
);

foreach ($rows as $row) {
    <process rows individually and calculate aggregates as I go>
}
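The "calculate aggregates as I go" part relies on the rows coming back ordered by key, so the running total can be emitted whenever the key changes. A minimal sketch of that pattern, under the same illustrative assumptions as above (PDO, my_table, printing instead of the real processing):

// Sketch of approach two: one ordered query, aggregates computed in the
// loop by breaking on a change of key.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

$rows = $pdo->query(
    "SELECT `key`, string_data, numeric_data
     FROM my_table
     ORDER BY `key`"
);

$current_key = null;
$running_sum = 0;

foreach ($rows as $row) {
    // A key change means the previous key's aggregate is complete.
    if ($current_key !== null && $row['key'] !== $current_key) {
        echo "Total for $current_key: $running_sum\n";
        $running_sum = 0;
    }
    $current_key = $row['key'];

    // Row-by-row processing plus the running aggregate.
    echo "  {$row['string_data']}\n";
    $running_sum += $row['numeric_data'];
}

// Flush the last group (if the table wasn't empty).
if ($current_key !== null) {
    echo "Total for $current_key: $running_sum\n";
}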
Performance is not a practical concern in this application; I'm just looking to write sensible and maintainable code.
I like the first option because it's more modular -- and I like the second option because it seems structurally simple. Is one option better than the other or is it really just a matter of style?
Even if performance is not an issue, your mind is. When a musician practices, every movement is intended to improve the musician's skill. As a developer, you should write every procedure in a way that improves your skill. Iterative loops through data are sloppy and ugly; SQL queries are elegant. Do you want to develop more elegant code, or more sloppy code?
If performance isn't a concern, I'd go with the second. Seems the tiniest bit friendlier.
If performance were a concern, my answer would be "don't think, profile". :)