
MySQL row count limit

Haden

Senior member
Is it possible to get more than 2^32 rows in a MySQL table on x86 hardware?
{maybe postgresql?}

I would rather avoid having multiple tables and doing multiple selects to overcome this problem.
 
According to my book, MySQL has an 8-byte integer type (BIGINT). Not sure if you can have that many rows or not, but if you've got more than 4 billion rows in one table you might want to reconsider some things anyway.
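For reference, a minimal sketch of a table keyed by that 8-byte type (the table and column names here are made up for illustration):

```sql
-- Hypothetical example: BIGINT UNSIGNED ranges from 0 to 2^64 - 1,
-- so the key itself is nowhere near running out at 2^32 rows.
CREATE TABLE big_log (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created INT UNSIGNED NOT NULL,
    PRIMARY KEY (id)
);
```

Whether the storage engine can actually hold that many rows is a separate question from the key's range.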
 
Armitage - setting MAX_ROWS when creating the table still gets capped at 2^32; not sure what else can be done about the table, though.
As for the table - it's a very primitive log of new connections, but it fills up in about two weeks, and as the network grows the situation will only get worse:

CREATE TABLE ulog (
local_time INT UNSIGNED NOT NULL,
oob_prefix CHAR(6) NOT NULL,
ip_saddr INT UNSIGNED NOT NULL,
ip_daddr INT UNSIGNED NOT NULL,
ip_protocol TINYINT UNSIGNED NOT NULL,
ip_ttl TINYINT UNSIGNED NOT NULL,
tcp_sport SMALLINT UNSIGNED,
tcp_dport SMALLINT UNSIGNED,
INDEX(local_time)
) MAX_ROWS=500000000000;
 
Originally posted by: Haden
Armitage - setting MAX_ROWS when creating the table still gets capped at 2^32; not sure what else can be done about the table, though.
As for the table - it's a very primitive log of new connections, but it fills up in about two weeks, and as the network grows the situation will only get worse:

CREATE TABLE ulog (
local_time INT UNSIGNED NOT NULL,
oob_prefix CHAR(6) NOT NULL,
ip_saddr INT UNSIGNED NOT NULL,
ip_daddr INT UNSIGNED NOT NULL,
ip_protocol TINYINT UNSIGNED NOT NULL,
ip_ttl TINYINT UNSIGNED NOT NULL,
tcp_sport SMALLINT UNSIGNED,
tcp_dport SMALLINT UNSIGNED,
INDEX(local_time)
) MAX_ROWS=500000000000;

A table like that should be archived on a regular basis. Querying a table that large will cripple your database system, and it really should not be necessary to keep connection logs in the same table indefinitely.
 
MrChad, even when approaching the limit I see no performance problems (all queries select by time).
I don't want to keep logs indefinitely - but one month is the minimum, and I would need maybe 2^34 rows for that.
 
You get 3550 new connections per second 24/7?? And MySQL keeps up with that? Because that's what it would take to fill that table in two weeks.

On a fairly respectable system, I set up a table with just a single int column and wrote a Python script to push 1,000,000 values into it ... I got about 7352 inserts/second (6369/sec with an index on that int). So I suppose it's possible ... but wow.
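The fill-rate figure is easy to sanity-check: rows needed to hit the 2^32 ceiling, divided by the seconds in two weeks.

```python
# Sanity check of the "3550 connections per second" estimate:
# 2^32 rows spread evenly over 14 days.
ROW_LIMIT = 2 ** 32            # the row ceiling being discussed
TWO_WEEKS_S = 14 * 24 * 3600   # seconds in 14 days (1,209,600)

rate = ROW_LIMIT / TWO_WEEKS_S
print(round(rate))  # about 3551 inserts/second
```

Which lines up with the back-of-the-envelope number above.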
 
Originally posted by: Haden
MrChad, even when approaching the limit I see no performance problems (all queries select by time).
I don't want to keep logs indefinitely - but one month is the minimum, and I would need maybe 2^34 rows for that.

You can keep the logs indefinitely ... but if a row is older than 2 weeks, you have to go to the archive. Which could still be a table in the database, just not the live table that the logging is currently going to.

Could just be a simple script like this:

CREATE TABLE ulog_timestamp SELECT * FROM ulog WHERE local_time < timestamp;
DELETE FROM ulog WHERE local_time < timestamp;
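An alternative rotation scheme, sketched here on the assumption that an empty table named ulog_new with the same schema has been created in advance, swaps the tables in one step so the logger never writes into the table being archived:

```sql
-- Sketch: ulog_new must already exist with the same definition as ulog.
-- RENAME TABLE performs both renames atomically, so logging continues
-- uninterrupted; ulog_archive can then be processed at leisure.
RENAME TABLE ulog TO ulog_archive, ulog_new TO ulog;
```

This avoids the window in the copy-then-delete approach where rows inserted between the two statements could be deleted without having been copied.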
 
Armitage, more like 2400; to be exact, it takes around 19-21 days to fill it.

Anyway, thanks for your time. I've already implemented a solution with several tables; it just seems like a hack to me.
 
Originally posted by: Haden
Armitage, more like 2400; to be exact, it takes around 19-21 days to fill it.

Anyway, thanks for your time. I've already implemented a solution with several tables; it just seems like a hack to me.

I'm still curious what generates 2400 logins/second! Seems like you'd have bigger guns than MySQL available for something with that kind of traffic.
 
It's a dedicated box for logging connections on a somewhat large network. That data should actually be queried pretty rarely, based on some reported incident time.
As for using MySQL - well, I'm in the testing phase right now, and it worked fine at first.
Then again, what started as "holds 2 weeks" is actually more like "holds more than two weeks, and I've done lots of portscanning to test things out"...
 
What table type are you using? Probably MyISAM. You might try InnoDB ... I have no idea if it matters, but it might be worth trying.
 
I just came across something that might help you out. Look up MySQL MyISAM MERGE tables. A MERGE table lets you define one table consisting of several identical sub-tables.
It might not help with the row count limit, but it could make the split-up log tables easier to deal with.
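As a sketch of what that could look like here, assuming two monthly sub-tables named ulog_2005_01 and ulog_2005_02 (hypothetical names) already exist as identical MyISAM tables with the ulog schema from above:

```sql
-- Sketch: the sub-tables must already exist and be identical MyISAM tables.
CREATE TABLE ulog_all (
    local_time INT UNSIGNED NOT NULL,
    oob_prefix CHAR(6) NOT NULL,
    ip_saddr INT UNSIGNED NOT NULL,
    ip_daddr INT UNSIGNED NOT NULL,
    ip_protocol TINYINT UNSIGNED NOT NULL,
    ip_ttl TINYINT UNSIGNED NOT NULL,
    tcp_sport SMALLINT UNSIGNED,
    tcp_dport SMALLINT UNSIGNED,
    INDEX(local_time)
) ENGINE=MERGE UNION=(ulog_2005_01, ulog_2005_02) INSERT_METHOD=LAST;

-- Queries against ulog_all span every sub-table transparently:
SELECT COUNT(*) FROM ulog_all WHERE local_time > 1104537600;
```

INSERT_METHOD=LAST sends new rows to the last table in the UNION list, so rotation is just a matter of altering the UNION to add a fresh sub-table.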
 