I have a database of approximately 20 million rows and about 25 columns. 23 of the columns hold plain numbers/text; the other 2 are BLOBs, one storing an actual file between 1 and 400 KB in size, the other consistently about 10 KB. Of the 23 number/text columns, 11 have a fixed set of possible values, between 4 and 10 options each.
My question is: should I just slam everything into one 20M×25 table, or should I break it up into multiple tables, and if so, in what way? I don't care about database size as much as I do about database performance.
We will need to run queries to figure out how many rows have property X in column 5 and property Y in column 6 and things like that.
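For illustration, the count queries I mean have roughly this shape (a minimal sketch with a hypothetical table and hypothetical column names, using SQLite in memory; the real table would of course be ~20M rows):

```python
import sqlite3

# In-memory sketch of the kind of aggregate query we need.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE samples (id INTEGER PRIMARY KEY, prop_x TEXT, prop_y TEXT)"
)
conn.executemany(
    "INSERT INTO samples (prop_x, prop_y) VALUES (?, ?)",
    [("A", "fast"), ("A", "slow"), ("B", "fast")],
)

# "How many rows have property X in one column and property Y in another?"
count = conn.execute(
    "SELECT COUNT(*) FROM samples WHERE prop_x = ? AND prop_y = ?",
    ("A", "fast"),
).fetchone()[0]
print(count)  # 1
```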
We'll also be pulling the files out, analyzing them, and writing the results, duration, and important markers into another table.
My concern with splitting into separate tables is that having to do joins might hurt performance if we need to run 20 million queries to do a particular processing step on each of the files.
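To make the trade-off concrete, here is a minimal sketch (hypothetical names, SQLite in memory) of the normalized variant I'm asking about: one of the low-cardinality columns becomes a small lookup table, and queries then need a join to recover the readable value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Small lookup table for a column with only a handful of possible values.
conn.execute("CREATE TABLE status (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO status VALUES (?, ?)", [(1, "pending"), (2, "done")])

# The big table stores just the foreign key instead of the repeated text.
conn.execute(
    "CREATE TABLE samples "
    "(id INTEGER PRIMARY KEY, status_id INTEGER REFERENCES status(id))"
)
conn.executemany("INSERT INTO samples (status_id) VALUES (?)", [(1,), (2,), (2,)])

# The aggregate query now needs a join to get back the readable label.
rows = conn.execute(
    "SELECT s.name, COUNT(*) FROM samples t "
    "JOIN status s ON t.status_id = s.id "
    "GROUP BY s.name ORDER BY s.name"
).fetchall()
print(rows)  # [('done', 2), ('pending', 1)]
```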