Devices that use SD cards identify the card by requesting a 128-bit identification string (the Card-Specific Data, or CSD, register) from the card. For standard-capacity SD cards, 12 of these bits identify the number of memory clusters (ranging from 1 to 4,096) and 3 bits identify the number of blocks per cluster (which decode to 4, 8, 16, 32, 64, 128, 256, or 512 blocks per cluster).
In older 1.x implementations the block size was exactly 512 bytes, giving a maximum of 4,096 × 512 × 512 bytes = 1 gigabyte of storage. A later revision of the 1.x standard allowed a 4-bit field to indicate a block size of 1,024 or 2,048 bytes instead, yielding up to 4 gigabytes of storage.
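Sketched in C, the first-generation calculation might look as follows. The field names C_SIZE, C_SIZE_MULT, and READ_BL_LEN follow the SD specification's CSD register; extracting the packed bit fields from the raw 128-bit register is omitted here, so this is an illustrative sketch rather than driver code.

    #include <stdint.h>
    #include <stdio.h>

    /* CSD version 1.0 capacity: clusters x blocks-per-cluster x block length. */
    static uint64_t sdsc_capacity_bytes(uint16_t c_size,     /* 12 bits: 0..4095 */
                                        uint8_t c_size_mult, /* 3 bits: 0..7 */
                                        uint8_t read_bl_len) /* 9, 10, or 11 */
    {
        uint64_t clusters = (uint64_t)c_size + 1;              /* 1..4096 */
        uint64_t blocks_per_cluster = 1u << (c_size_mult + 2); /* 4..512 */
        uint64_t block_len = 1u << read_bl_len;                /* 512, 1,024, or 2,048 bytes */
        return clusters * blocks_per_cluster * block_len;
    }

    int main(void)
    {
        /* Largest original 1.x card: 4,096 x 512 x 512 bytes = 1 GB. */
        printf("%llu\n", (unsigned long long)sdsc_capacity_bytes(4095, 7, 9));
        /* Same geometry with 2,048-byte blocks, per the later revision: 4 GB. */
        printf("%llu\n", (unsigned long long)sdsc_capacity_bytes(4095, 7, 11));
        return 0;
    }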
Host devices designed before this revision may misidentify such cards, typically reporting a lower capacity than the card actually has, because they assume 512 bytes per block rather than 1,024 or 2,048.
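As a hypothetical illustration of this failure mode (the geometry is the 4 GB example above, not any particular device):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t clusters = 4096, blocks_per_cluster = 512;
        uint64_t true_block_len = 2048;   /* card reports 2,048-byte blocks */
        uint64_t assumed_block_len = 512; /* legacy host assumes 512 bytes */

        /* Actual capacity: 4 GB. */
        printf("actual:   %llu\n", (unsigned long long)
               (clusters * blocks_per_cluster * true_block_len));
        /* Capacity as seen by the legacy host: 1 GB, a quarter of the true size. */
        printf("reported: %llu\n", (unsigned long long)
               (clusters * blocks_per_cluster * assumed_block_len));
        return 0;
    }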
For the new SDHC (2.0) implementation, 22 bits of the identification string indicate the memory size in multiples of 512 KB. The SD Association currently allows only 16 of the 22 bits to be used for SDHC, giving a maximum size of 32 GB. All SD cards with a capacity larger than 4 GB must use at least the 2.0 implementation. Two bits that were previously reserved and fixed at 0, now called the "CSD Structure" field, identify the type of card: 0 is standard capacity; 1 is high (SDHC) and extended (SDXC) capacity; 2 and 3 are reserved. Older host devices are unaware of this field and thus cannot correctly identify SDHC or SDXC cards.
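A minimal sketch of the 2.0 decode, assuming the 16-byte csd array holds the register most-significant byte first; the bit positions (CSD Structure at bits 127:126, the 22-bit size field, C_SIZE, at bits 69:48) follow the SD specification, but the decoder itself is illustrative rather than production driver code.

    #include <stdint.h>
    #include <stdio.h>

    /* Returns the capacity in bytes for a 2.0-format CSD, or -1 if the
     * register uses the standard-capacity (1.x) layout instead. */
    static int64_t sd_v2_capacity_bytes(const uint8_t csd[16])
    {
        uint8_t csd_structure = csd[0] >> 6;  /* bits 127:126 */
        if (csd_structure != 1)
            return -1;  /* 0 = standard capacity: use the 1.x formula */

        /* 22-bit C_SIZE at bits 69:48, spread across bytes 7, 8, and 9. */
        uint32_t c_size = ((uint32_t)(csd[7] & 0x3F) << 16)
                        | ((uint32_t)csd[8] << 8)
                        |  (uint32_t)csd[9];
        return ((int64_t)c_size + 1) * 512 * 1024;  /* multiples of 512 KB */
    }

    int main(void)
    {
        uint8_t csd[16] = {0};
        csd[0] = 0x40;  /* CSD Structure = 1: SDHC/SDXC */
        csd[8] = 0xFF;
        csd[9] = 0xFF;  /* C_SIZE = 65,535: 65,536 x 512 KB = 32 GB */
        printf("%lld bytes\n", (long long)sd_v2_capacity_bytes(csd));
        return 0;
    }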
All SDHC readers are able to use standard SD cards,[32] and all SDXC readers are able to use SD and SDHC cards.