
selecting using results as table name DB2

Is there a way to run a select statement to get a set of table names and then query using those results in a statement?

This might sound a bit confusing. I have a database set up to accommodate hundreds of millions of files, and it creates its own tables past a certain point (when a table grows beyond 10 million rows, it creates a new one). Currently we have around 5640 tables, and we need to check basically every one (there are a few system tables and such we can skip). I can easily generate a list of table names, but I really don't want to run 5640 individual selects.

Is there some way to use those results as the table?

I've attempted to do this using a temp table, but unfortunately my SQL is weak, and I don't know whether DB2 has an equivalent of the T-SQL way of doing this. Maybe using FINAL TABLE?

What I have so far:
Code:
select distinct arsseg.table_name from sa2 inner join arsag
on sa2.load_agid = arsag.agid
inner join arsseg
on arsseg.agid = arsag.agid

That returns the table names; I just want to use that as the FROM and query based on something within each table.
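There's no way in plain SQL to use a result set directly as a table name, but DB2 supports dynamic SQL in SQL PL, so one approach is to loop over that name query with a cursor, build each statement as a string, and run it with EXECUTE IMMEDIATE. A rough sketch, assuming an anonymous SQL PL block (DB2 9.7+) and a hypothetical `my_counts` results table you would create first:

```sql
-- Sketch only: collect a row count from each matching table into a
-- results table. The results table name (my_counts) is hypothetical:
--   CREATE TABLE my_counts (tabname VARCHAR(128), n BIGINT);
BEGIN
  DECLARE v_sql VARCHAR(2000);
  -- Cursor over the same name query from above
  FOR v AS
    SELECT DISTINCT arsseg.table_name
    FROM sa2
    INNER JOIN arsag ON sa2.load_agid = arsag.agid
    INNER JOIN arsseg ON arsseg.agid = arsag.agid
  DO
    -- Build each per-table statement as text and execute it
    SET v_sql = 'INSERT INTO my_counts '
             || 'SELECT ''' || v.table_name || ''', COUNT(*) '
             || 'FROM ' || v.table_name;
    EXECUTE IMMEDIATE v_sql;
  END FOR;
END
```

The same pattern works inside a stored procedure if your tooling doesn't support anonymous blocks; the key piece is building the SQL text per table and executing it dynamically.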

Thanks,
 
If I have multiple similar tables and want to query each one, I might create a view and use UNIONs for each table.

Code:
CREATE VIEW ManyTableView
AS
SELECT * FROM table1
UNION
SELECT * FROM table2;

SELECT * FROM ManyTableView;
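If the set of tables is fairly stable, you could even generate the view body from the name query in the first post, e.g. with LISTAGG (available in recent DB2 versions). A sketch, and note that with thousands of tables the generated text may exceed string-length limits, so you may need to build it in batches:

```sql
-- Sketch: produce the DDL text for a UNION ALL view over the tables.
-- UNION ALL avoids the duplicate-elimination sort that plain UNION forces.
SELECT 'CREATE VIEW ManyTableView AS '
       || LISTAGG('SELECT * FROM ' || table_name, ' UNION ALL ')
FROM (SELECT DISTINCT arsseg.table_name
      FROM sa2
      INNER JOIN arsag ON sa2.load_agid = arsag.agid
      INNER JOIN arsseg ON arsseg.agid = arsag.agid) t
```

You would then run the generated statement once to create the view, and re-generate it whenever the system adds a new table.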


I have never done 5640 tables, though. I think it may be time to change the architecture; with so many tables it must be slow by now. You might create some new servers and query each server individually, or you could use a document DB (a NoSQL DB) like MongoDB, since you can spread the load across multiple servers.
 
It seems like really bad design to have to query thousands of tables; that's simply not going to be performant or scalable. Splitting large amounts of data into separate tables is a good strategy, but you need to be able to cheaply determine which table holds the data you need.

I have a database with similar problems, and I break out a new table every 10 million rows. I have an incrementing id, so I know the data is in tableX where X = id / 10 million. Can you do something similar?
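If you can adopt that scheme, the mapping is just integer division, which you can compute inline. A sketch (the table naming and the lookup table here are hypothetical):

```sql
-- Hypothetical naming: tables table0, table1, ... each hold 10 million
-- rows, so integer division maps an id straight to its table:
-- id 57234111 -> 57234111 / 10000000 = 5 -> the row lives in table5.
SELECT 'table' || TRIM(CHAR(id / 10000000)) AS holding_table
FROM my_ids  -- hypothetical table containing the ids to look up
```

Because both operands are integers, DB2 truncates the division, which is exactly the bucket number you want.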

By the way, storing files in a SQL database is less than ideal. You might look into migrating your data into some kind of key-value blob store.
 

You can send your complaints to IBM. This is a system they sold us, designed specifically to index and store millions of line-data items. We just happen to have 5600+ different reports for which to store data.

Having 50 million rows in a single table also wouldn't work for this kind of system. Each report is indexed on different values, with differing rows, and must be individually managed.

Anyway, it doesn't matter as far as the query is concerned; in the end I just had someone query each table individually.
 