ID DB Fields as BigInt instead of Int

Silverstripe Version: 4.1


Is it possible to cast the database ID fields as BigInt instead of Int?

I tried to override my DataObjects like so:

    private static $db = array(
        'ID' => 'BigInt',
        /* other fields... */
    );

And I added an extension to mysite.yml:

    SilverStripe\CMS\Model\SiteTree:
      extensions:
        - CustomSiteTree

…to override the SiteTree class (although this doesn’t address the other SiteTree_* db tables) like so:

    use SilverStripe\ORM\DataExtension;

    class CustomSiteTree extends DataExtension
    {
        private static $db = array('ID' => 'BigInt');
    }

…and I’m extending the Member class with another extension. But I had to manually enable AUTO_INCREMENT on all of these ID fields via phpMyAdmin, and running /dev/build removes the AUTO_INCREMENT again.

Is there a better way to approach this? (I have reverted my database and code back to working condition.)

Thank you.

Hi @crow, I don’t know if this is possible, but I’m curious about why you would need to. A signed Int is suitable for up to about 2.15 billion records per table… do you need more than this?
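As a quick sanity check of that 2.15 billion figure: it’s the 4-byte signed INT ceiling, while MySQL’s BIGINT is 8 bytes (on 64-bit PHP, `PHP_INT_MAX` happens to equal a signed BIGINT’s maximum):

```php
<?php
// MySQL's signed INT is 4 bytes, signed BIGINT is 8 bytes.
$intMax    = 2 ** 31 - 1;  // 2147483647, the ~2.15 billion ceiling
$bigintMax = PHP_INT_MAX;  // 9223372036854775807 on 64-bit PHP,
                           // the same as a signed BIGINT's maximum

echo $intMax, PHP_EOL;     // 2147483647
echo $bigintMax, PHP_EOL;  // 9223372036854775807
```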

DataObjects don’t actually use ‘Int’ (DBInt) for their ID; they use DBPrimaryKey, which extends DBInt, and it’s in the DBPrimaryKey class that the auto increment is activated. You could try creating your own version of DBPrimaryKey by duplicating that class, naming it something like BigintDBPrimaryKey, and simply extending DBBigInt instead of DBInt. Then, to activate it, use the Injector to substitute your custom class for DBPrimaryKey:

    SilverStripe\Core\Injector\Injector:
      SilverStripe\ORM\FieldType\DBPrimaryKey:
        class: BigintDBPrimaryKey

I don’t know if this will work but worth a shot :man_shrugging:
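A minimal sketch of what that duplicated class might look like. The `DBBigInt` stub below exists only so the sketch runs outside the framework; in a real project you would `use SilverStripe\ORM\FieldType\DBBigInt;` instead, copy the rest of DBPrimaryKey’s body verbatim, and change only the parent class:

```php
<?php

// Stub so this sketch parses outside SilverStripe. In a real project,
// delete it and `use SilverStripe\ORM\FieldType\DBBigInt;` instead.
class DBBigInt
{
}

// Sketch of the suggested class: a copy of DBPrimaryKey whose only
// change is the parent. The real DBPrimaryKey has more to copy over
// (constructor, requireField(), form-field scaffolding); this shows
// just the auto-increment flag that distinguishes a primary key.
class BigintDBPrimaryKey extends DBBigInt
{
    protected $autoIncrement = true;

    public function setAutoIncrement($autoIncrement)
    {
        $this->autoIncrement = $autoIncrement;
        return $this;
    }

    public function getAutoIncrement()
    {
        return $this->autoIncrement;
    }
}

$field = new BigintDBPrimaryKey();
var_dump($field->getAutoIncrement()); // bool(true)
```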

Thank you for the quick response. You’re absolutely right, in all likelihood I will not need more than 2.15 billion rows in a table. But I like the idea of being able to support more. If I finish this project, I would like to launch it in a “high availability” environment. The site will be taking a lot of user input, and with enough popularity over the course of several years, there is a minute possibility that tables could exceed 2.15 billion rows (I know this isn’t likely).

Before taking the approach above, I did modify DBPrimaryKey to extend DBBigInt instead of DBInt, but this yielded no changes when running /dev/build?flush=all. However, I did make this change in my final solution because it might be used outside of building the database. (I like your idea of copying the class and adding it to the .yml file.)

Changing DBForeignKey to extend DBBigInt does yield changes when running /dev/build. Modifying the MySQLSchemaManager class in SilverStripe\ORM\Connect is what changed the primary keys (and is where the AUTO_INCREMENT is actually activated).

My final solution required changing nine framework files. The first /dev/build after the changes took a WHILE, but subsequent runs complete at a normal pace. Everything seems to be working fine; however, this will require more testing and performance measurement. If I decide to keep these changes, I will have to maintain the framework modifications when upgrading. I can post the solution if you or anyone else is interested.

@Tim summed up pretty much what I was going to say. If the possibility is only minute, I would say it’s not worth investing time in solving a theoretical problem now, especially when it carries a risk of side effects and adds a maintenance burden. If you’re fortunate enough to be on a trajectory towards hitting those limits, you can surely justify upgrading the project at a later date to allow for higher record limits.

At a rate of 10 inserts per second, every single second, 24 hours a day, that’s nearly 7 years of operation. I’d suggest that if you are dealing with data at that level, MySQL isn’t the right platform to use anyway.
I like the “because I can” element of this, but I’m not convinced that it’s a good course of action if you’re seriously considering that kind of data storage.
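The back-of-envelope arithmetic behind that 7-year figure, assuming a constant 10 inserts per second against the signed INT ceiling:

```php
<?php
// Years until a signed INT primary key is exhausted at a constant
// 10 inserts per second.
$maxId          = 2 ** 31 - 1;        // 2147483647
$insertsPerSec  = 10;
$secondsPerYear = 60 * 60 * 24 * 365; // 31536000

$years = $maxId / $insertsPerSec / $secondsPerYear;
printf("%.1f years\n", $years);       // 6.8 years
```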
