Full format creates a fresh map of the drive where bad sectors are hidden, etc., so they can't be used.
Well, a bad sector first has to be known. If a sector is 'bad' but the hard drive does not know it, meaning it never failed a READ request, then overwriting that sector would not uncover the issue. Even worse, it would destroy any evidence of there ever having been a bad sector.
If a bad sector is known, meaning it was read at least once and failed, then the hard drive will show Current Pending Sector > 0 in its SMART output. In this case, if you overwrite the bad sector, the hard drive will actively READ that same sector afterwards and check whether it is actually readable. If not, it will remap the sector to a reserve sector. But it will not perform this test if the hard drive is unaware of the weak/bad sector.
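If you want to check what the drive already knows, the raw value of SMART attribute 197 (Current_Pending_Sector) is the one to watch. A minimal sketch, assuming smartmontools is installed and with /dev/sda as a placeholder device path:

```python
# Read the Current Pending Sector raw value via smartctl (smartmontools).
# Sketch only; /dev/sda is a placeholder for the drive under test.
import subprocess

def pending_sectors(device="/dev/sda"):
    # 'smartctl -A' prints the SMART attribute table, one attribute per line.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Current_Pending_Sector" in line:
            # The raw value is the last column of the attribute row.
            return int(line.split()[-1])
    return None  # drive does not report this attribute

if __name__ == "__main__":
    print("Current Pending Sector:", pending_sectors())
```

Running `smartctl -t long` on the drive starts its built-in full-surface read scan, which is one way to make the drive discover weak sectors and log them as pending.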
Additionally, old filesystems like FAT and NTFS track bad sectors themselves: NTFS keeps them in its $BadClus metafile, and FAT marks the affected clusters as bad in the allocation table, so the filesystem never uses them. This is extremely outdated, since hard drives have been remapping bad sectors themselves for decades; having the filesystem mimic that task is redundant. Newer filesystems (ZFS, Btrfs, ReFS) have no such feature, but are designed to cope with individually unreadable sectors.
So for a new hard drive, reading the entire surface actually uncovers bad sectors, but simply writing does not. Even worse, after writing, unreadable sectors may become readable again: so-called UBER bad sectors that are only unreadable due to insufficient error correction, not because of physical damage. After (over)writing these, all evidence disappears from the SMART data; Current Pending Sector is decremented while Reallocated Sector Count stays the same.
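To make 'reading the entire surface' concrete, here is a rough sketch that reads a whole block device sequentially and records the byte offsets where the drive returns a read error. The device path /dev/sdb and the 1 MiB chunk size are placeholders; in practice badblocks in its default read-only mode or a long SMART self-test does the same job more conveniently.

```python
# Full-surface read test sketch for a Linux block device.
# Read-only, but it takes hours on a large drive; needs sufficient privileges.
import os

def surface_read(device="/dev/sdb", chunk=1024 * 1024):
    errors = []
    fd = os.open(device, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        offset = 0
        while offset < size:
            try:
                os.pread(fd, min(chunk, size - offset), offset)
            except OSError:
                # Unreadable region: note the offset so it can be compared
                # with the drive's SMART pending-sector count afterwards.
                errors.append(offset)
            offset += chunk
    finally:
        os.close(fd)
    return errors

if __name__ == "__main__":
    bad = surface_read()
    print(f"{len(bad)} unreadable chunk(s) found")
```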
To validate new hard drives, my preference would be a stress test with some kind of checksumming tool, or a checksumming filesystem like ZFS.
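For the curious, here is roughly what such a checksum-based validation boils down to if you sketch it by hand instead of using ZFS: write pseudorandom data derived from a known seed across the drive, then read everything back and verify each block. Everything here (device path, block count, seed) is a made-up placeholder, and the write pass destroys whatever is on the target; creating a ZFS pool, filling it with data and running zpool scrub achieves the same end with less effort.

```python
# Checksum-style write-then-verify stress test (sketch).
# DESTRUCTIVE: overwrites the target. /dev/sdb, the block count and the seed
# are placeholders, not recommendations.
import hashlib
import os

BLOCK = 1024 * 1024  # 1 MiB per block

def block_data(seed: bytes, index: int) -> bytes:
    # Deterministic per-block payload: expand SHA-256 of (seed, index, counter).
    out = bytearray()
    counter = 0
    while len(out) < BLOCK:
        out += hashlib.sha256(
            seed + index.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return bytes(out[:BLOCK])

def write_then_verify(path="/dev/sdb", blocks=1024, seed=b"validate"):
    with open(path, "r+b") as f:
        for i in range(blocks):            # write pass
            f.seek(i * BLOCK)
            f.write(block_data(seed, i))
        f.flush()
        os.fsync(f.fileno())
        # Drop the page cache so the verify pass reads from the platters,
        # not from RAM (Linux-specific).
        os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        mismatches = []
        for i in range(blocks):            # verify pass
            f.seek(i * BLOCK)
            if f.read(BLOCK) != block_data(seed, i):
                mismatches.append(i)
    return mismatches

if __name__ == "__main__":
    print(f"{len(write_then_verify())} block(s) failed verification")
```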