Reserved space on the HDD not exposed to or accessible by the external interface (e.g. IDE/SATA), used internally only by the onboard microcontroller and firmware. Sector translation maps, SMART logs, spare sectors, etc. are all stored there. A hard drive is a self-contained computer in and of itself.
There is also "hidden" data for servo positioning feedback (track number, angle, spindle indexing, etc.), error correction codes, and so on, all part of normal operation. There is a lot of data on the platter for each sector beyond the user's 512 bytes that can never be seen. IIRC the capacity advertised to the end user is really only about 1/2 to 2/3 of the actual physical space. The rest isn't "wasted"; it's the overhead necessary for the device to even work the way it does in the first place. Network and serial bus speeds are the opposite: they quote raw physical wire speed and don't exclude the substantial "hidden" protocol overhead of packet headers, addressing, error correction and delivery with TCP, command packets, etc.
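To make the network side concrete, here's a back-of-the-envelope calculation (my numbers, not from the post, but standard TCP/IPv4-over-Ethernet header sizes) of how much of a "raw wire speed" link is actually user payload:

```python
# Goodput fraction of an Ethernet link once TCP/IP headers and framing are
# counted. Per-frame Ethernet overhead: 7B preamble + 1B SFD + 14B MAC header
# + 4B FCS + 12B inter-frame gap.
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12
IP_HEADER = 20   # IPv4, no options
TCP_HEADER = 20  # no options

def goodput_fraction(payload_bytes: int) -> float:
    """Fraction of raw wire bytes that are actual user payload."""
    wire_bytes = payload_bytes + TCP_HEADER + IP_HEADER + ETH_OVERHEAD
    return payload_bytes / wire_bytes

# Full-size frame: 1460-byte TCP payload on a standard 1500-byte MTU.
print(round(goodput_fraction(1460), 3))  # 0.949 -> a "1 Gb/s" link moves ~949 Mb/s of payload
# Tiny packets are far worse:
print(round(goodput_fraction(64), 3))    # 0.451
```

So even in the best case, a few percent of the advertised speed is protocol overhead, and for small packets the majority of the wire time is overhead.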
Bad cluster info detected by chkdsk is recorded in the file system metadata; it is lost when the partition is deleted, but survives a format unless you change the file system. Remember, a cluster is a filesystem construct and has nothing to do with the underlying media. All chkdsk can do is write and read a cluster a few times and check that the data comes back consistent.
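That write/read consistency test is simple enough to sketch. This is a toy illustration (cluster size and the single-pattern pass are my simplifications; real tools use several patterns and multiple passes), operating on an ordinary file rather than a raw device:

```python
# Toy chkdsk-style surface scan: write a known pattern to each "cluster",
# read it back, and report any cluster whose data doesn't match.
import os

CLUSTER_SIZE = 4096  # assumed cluster size for the example

def scan_clusters(path: str, n_clusters: int) -> list[int]:
    """Return indices of clusters whose data didn't read back consistently."""
    bad = []
    pattern = bytes([0xA5]) * CLUSTER_SIZE
    with open(path, "r+b") as f:
        for i in range(n_clusters):
            f.seek(i * CLUSTER_SIZE)
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())          # force the write past OS caches
            f.seek(i * CLUSTER_SIZE)
            if f.read(CLUSTER_SIZE) != pattern:
                bad.append(i)
    return bad
```

Note this only ever sees what the drive chooses to return; it has none of the raw media information the firmware has.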
Bad sectors detected by the drive itself are logically remapped to hidden spares by the drive's own firmware, recorded in the SMART log as a sector reallocation event, and removed from user access, never to be seen again regardless of what you do as an end user. The drive is much better at detecting bad sectors than a filesystem tool because it has access to raw media indicators: error rates, how many bits were in error, signal-to-noise ratio at the head itself, and so on.
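Conceptually the remapping is just a translation table the firmware consults on every access. A minimal model (entirely invented here, real firmware is far more involved) looks like:

```python
# Minimal model of firmware-side sector remapping: reads and writes to a
# reallocated LBA are transparently redirected to a spare in the hidden area.
class RemapTable:
    def __init__(self, spare_start: int, spare_count: int):
        self.map = {}  # bad LBA -> spare LBA
        self.free_spares = list(range(spare_start, spare_start + spare_count))
        self.reallocated_count = 0  # what SMART attribute 5 reports

    def reallocate(self, bad_lba: int) -> int:
        """Retire a failing sector; all future access goes to the spare."""
        if bad_lba not in self.map:
            self.map[bad_lba] = self.free_spares.pop(0)
            self.reallocated_count += 1
        return self.map[bad_lba]

    def translate(self, lba: int) -> int:
        """Logical-to-physical lookup done on every host access."""
        return self.map.get(lba, lba)

t = RemapTable(spare_start=1_000_000, spare_count=1024)
t.reallocate(42)
print(t.translate(42), t.translate(43), t.reallocated_count)  # 1000000 43 1
```

The host only ever sees logical LBAs, which is why the bad sector is "never seen again": the translation happens below the interface.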
Working hard drives with a lot of remapped sectors will start to run more slowly, because what should be a sequential operation keeps paying random-access overhead to reach the spares and sector map at another physical location. There is often no other indication of failure unless you look at the SMART data and see the thousands of reallocations. And you can also get high seek error rates independent of any sector reliability problem, where your data isn't at risk but the drive is slower than normal because the heads are "jittery" and take too long to zero in on a track (overshoot, undershoot, smaller overshoot, smaller undershoot, ah, there it is). You can tell these drives by what sounds like a slow, lazy actuator and the sound of sand being crushed even when there isn't enough active disk I/O to justify it.
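A quick toy model (all timing numbers invented for illustration) shows why a few hundred remaps wreck a sequential read: each remapped sector costs a round-trip seek to the spare region and back, and one seek costs as much as streaming hundreds of sectors.

```python
# Back-of-the-envelope model of sequential read time on a drive with
# remapped sectors. Integer microseconds; both constants are assumptions.
SEQ_US_PER_SECTOR = 10    # streaming read cost per sector
SEEK_US = 8_000           # one random seek to/from the spare region

def sequential_read_us(n_sectors: int, remapped: int) -> int:
    # Every remapped sector in the range adds two seeks:
    # out to the spare area, then back to the original track.
    return n_sectors * SEQ_US_PER_SECTOR + remapped * 2 * SEEK_US

print(sequential_read_us(100_000, 0) / 1e6, "s")    # 1.0 s, healthy drive
print(sequential_read_us(100_000, 500) / 1e6, "s")  # 9.0 s, same data, 500 remaps
```

Under these assumed numbers, 0.5% of sectors being remapped makes the read 9x slower, with the drive reporting no errors at all.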
Hard drives suck anyway; they belong in the bin of history with punch cards, 8-tracks, cassette tapes, floppy disks, and CDs. Who accesses their data at tens of MB per second anymore in this day and age of GB and TB of data?