
SQL Table Design

drebo

Diamond Member
Pretty self-explanatory...

If you were designing a database for rates, for instance, would you favor a table with a large number of rows (one for each rate) or a wider table with fewer rows?

A telecom rate table would be a fairly rudimentary example. You could organize a single large table with a column for each LATA, where each row is an OCN, or you could arrange it so that each OCN/LATA combination has its own row.

Which way would you rather do it?
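For concreteness, the two layouts being compared can be sketched in SQLite via Python. All table and column names here are my own invention, and the LATA/OCN values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# "Wide" layout: one row per OCN, one column per LATA.
# Adding a new LATA means an ALTER TABLE.
cur.execute("""
    CREATE TABLE rates_wide (
        ocn      TEXT PRIMARY KEY,
        lata_120 REAL,
        lata_122 REAL
    )
""")

# "Tall" layout: one row per OCN/LATA combination.
# Adding a new LATA is just an INSERT.
cur.execute("""
    CREATE TABLE rates_tall (
        ocn  TEXT NOT NULL,
        lata TEXT NOT NULL,
        rate REAL NOT NULL,
        PRIMARY KEY (ocn, lata)
    )
""")

cur.execute("INSERT INTO rates_tall VALUES ('1234', '120', 0.015)")
row = cur.execute(
    "SELECT rate FROM rates_tall WHERE ocn = '1234' AND lata = '120'"
).fetchone()
print(row[0])  # 0.015
```

With the tall layout the composite primary key doubles as the lookup index, so finding one OCN/LATA rate is a single index seek.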
 
Adding columns for each rate would just be bad design. It'd be better to add new records instead of modifying the design of a table just to add a new rate.
 
I agree with Argo. Your description doesn't make the design clear, but from what you've said, I think the entity decomposition of your problem domain needs to be thought through some more.
 
Have you never seen a rate table before? There are X number of rates per Y number of entities. Both are finite and unchanging; however, the rates themselves may change.

My question is whether to break it down to where every XY combination gets its own row or whether there is only one row per entity which contains all rates applicable to that entity. I don't know how I could get any more descriptive or simple than that.
 
If the number is really fixed in both dimensions then I doubt it matters tremendously. I guess finding a given combination of x/y would be faster in the first case, but unless there are a ton of them it probably isn't significant. I usually look at table relationships from an extensibility and maintainability standpoint long before I think about performance, but again, if your rates and entities are really fixed I can't come up with a strong argument in either direction.

And by the way, "rate table" is not like "linked list." It's a domain-specific concept, and yes it's likely a number of us have never built one exactly like you are trying to build. You can almost never be too descriptive, imo.
 
Originally posted by: Markbnj
If the number is really fixed in both dimensions then I doubt it matters tremendously. I guess finding a given combination of x/y would be faster in the first case, but unless there are a ton of them it probably isn't significant. I usually look at table relationships from an extensibility and maintainability standpoint long before I think about performance, but again, if your rates and entities are really fixed I can't come up with a strong argument in either direction.

And by the way, "rate table" is not like "linked list." It's a domain-specific concept, and yes it's likely a number of us have never built one exactly like you are trying to build. You can almost never be too descriptive, imo.

QFT. I have no idea what LATA and OCN mean, except that they have something to do with telecom.
 
In my experience, even if someone says it's finite and unchanging... it still changes so it's better to build it in a way that lends to extensibility. Then again, without knowing specific industries or applications involved it's kinda hard to make arguments either way.

We lease our software on a monthly basis and just use two tables to track all the pricing: one table for software information, and another table that links to the primary key (software_id) on the software table with the different fees/rates. It changes constantly as new features are added, more support options get added, etc., so I find our way to be simple and easy.

I can't think of a normal case where the data to be stored will be so large that it really matters from a performance standpoint anyway.
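The two-table pricing scheme described above might look something like this. This is a minimal sketch under my own assumptions; the schema and all names except software_id are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    -- One row per product.
    CREATE TABLE software (
        software_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    -- One row per fee/rate, linked back to the product.
    -- New fees are new rows, not new columns.
    CREATE TABLE software_fee (
        fee_id      INTEGER PRIMARY KEY,
        software_id INTEGER NOT NULL REFERENCES software(software_id),
        description TEXT NOT NULL,
        monthly_fee REAL NOT NULL
    );
""")

cur.execute("INSERT INTO software VALUES (1, 'WidgetPro')")
cur.execute("INSERT INTO software_fee VALUES (NULL, 1, 'base license', 99.0)")
cur.execute("INSERT INTO software_fee VALUES (NULL, 1, 'premium support', 25.0)")

# Total monthly charge for one product is a simple aggregate.
total = cur.execute(
    "SELECT SUM(monthly_fee) FROM software_fee WHERE software_id = 1"
).fetchone()[0]
```

Adding a new support option is just another INSERT into software_fee, which is why this layout tolerates constant change.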
 
Well, the difference is a table that has 3000 rows vs. a table that has 45,000 rows with lookups being EXTREMELY frequent. I guess I'll try it the first way and then change it if I have problems.
 
Can you give us an example as to what the data looks like? I am thinking this could be a many-to-many relational mapping (based on your "lookups" statement above), but maybe I am over-engineering.
 
I did end up doing the many-to-many with one really long table as opposed to a wide table.

Basically, I have a set of 3000 criteria which are matched against. Each of those 3000 criteria has 13 different rates, based on a type. I could have gone with a 15-column table, as the types will never change.

However, I ended up going the other route, with a 4-column table which contains the PK (rateid), two foreign keys (one to type and one to criteria), and then the rate itself. Then I have two other tables: one which just has the 13 types as rows, and the other which has the 3000 criteria as rows. I think this will work fine, and I don't think my queries will be any more complicated than doing it the other way.
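That design could be sketched like this in SQLite via Python. The table and column names other than rateid are my own guesses at what the poster described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    -- The 13 types, one row each.
    CREATE TABLE rate_type (
        typeid INTEGER PRIMARY KEY,
        name   TEXT NOT NULL
    );
    -- The 3000 criteria, one row each.
    CREATE TABLE criteria (
        criteriaid INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    -- One row per type/criteria combination: PK, two FKs, and the rate.
    CREATE TABLE rate (
        rateid     INTEGER PRIMARY KEY,
        typeid     INTEGER NOT NULL REFERENCES rate_type(typeid),
        criteriaid INTEGER NOT NULL REFERENCES criteria(criteriaid),
        rate       REAL NOT NULL,
        UNIQUE (typeid, criteriaid)
    );
""")

cur.execute("INSERT INTO rate_type VALUES (1, 'interstate')")
cur.execute("INSERT INTO criteria VALUES (1, 'criterion A')")
cur.execute("INSERT INTO rate VALUES (NULL, 1, 1, 0.02)")

# The lookup is a two-join query; the UNIQUE constraint above gives the
# optimizer an index, so it stays fast even at tens of thousands of rows.
row = cur.execute("""
    SELECT r.rate
      FROM rate r
      JOIN rate_type t ON t.typeid = r.typeid
      JOIN criteria c  ON c.criteriaid = r.criteriaid
     WHERE t.name = 'interstate' AND c.name = 'criterion A'
""").fetchone()
```

If the application already knows the typeid and criteriaid, the joins drop out entirely and the lookup is a single indexed WHERE on the rate table.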
 
I would create a table with a row for each LATA_OCN_ID and rate.

lata_ocn_id,
rate,
date_start,
date_end

That table would track your slowly changing rate dimension, and the records with a NULL date_end would hold the current rate for each lata_ocn_id.
 