- Feb 7, 2001
- 1,637
- 0
- 0
All kinda theoretical at this point but...
Using MySQL currently (though that might change later), say I have a table with only about 4 KB per record, but an anticipated record count somewhere around 20 MILLION. That's a lot of data.
Now, would it be better to break the table into multiple tables, say something like this:
table_a ( records whose main field starts with the letter a )
table_b ( records whose main field starts with the letter b )
table_c ( ... etc., you get the point )
OR
Would it be better to just leave it in one HUGE table?
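For reference, the single-table layout would be something like this (column names are just placeholders, the real table has more fields):

-- rough sketch only; the point is the integer primary key
CREATE TABLE mytable (
    myprimarykey INT UNSIGNED NOT NULL,
    name         VARCHAR(255) NOT NULL,
    data         TEXT,                    -- most of the ~4 KB per record lives here
    PRIMARY KEY (myprimarykey)
);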
If it helps any, the only SELECTs that are done look the record up by its index, the primary key. So the select is just:
SELECT * FROM table WHERE myprimarykey = 233
If it was split into multiple tables, it would just change to:
"SELECT * FROM table_" . $firstcharacter . " WHERE myprimarykey = 233"
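If it matters, this is roughly how I'd check what the optimizer does in each case (table names are again just placeholders):

EXPLAIN SELECT * FROM mytable   WHERE myprimarykey = 233;
EXPLAIN SELECT * FROM mytable_a WHERE myprimarykey = 233;
-- both should come back with type = const, i.e. a single row fetched
-- straight off the primary key index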
Any thoughts? How much faster would 26 smaller tables be than 1 huge table when selecting by index?
