They are not direct competitors. The use case for the V100 is vastly different from the TPU's (they happen to overlap in Tensor-based work). The TPU is pretty much the definition of an ASIC (application-specific integrated circuit), in that it does Tensor-based work and that's it.
The V100, on the other hand, can be used for a variety of computations, which also includes Tensor-based work (it can do this in parallel with FP64/FP32 work, for instance). Think post-processing on the CUDA cores while running a machine learning algorithm on the Tensor cores. It's not limited to deep learning; it can do other things, which makes it different from the TPU.
And to my knowledge, Google happens to be one of Nvidia's HPC customers.
Very true. Samsung does not have any databook on their website that mentions HBM2, but we know they have been supplying Nvidia with HBM2 for the P100 for around a year now. Similarly, Hynix is AMD's HBM2 supplier, and I am pretty sure there are supply agreements between those two companies, just as there would be between Nvidia and Samsung.