To quickly answer the OP: no, sampling rate is unlikely to affect your perception of bass, because low-frequency waves need comparatively little timing detail to reproduce accurately.
The following applies to "regular" PCM audio:
Sampling frequency absolutely has an impact on resolution. To picture how accurately a waveform is depicted, imagine it drawn on an LCD screen: in a perfect world, sample rate is your horizontal resolution and bit depth is your vertical resolution. More on this later.
That said, from my own subjective experiments recording at different sampling rates and bit depths, I can say that moving from 16-bit to 24-bit makes a bigger difference in perceived sound quality than increasing the sampling rate. Regardless of signal or sampling frequency, an increase in bit depth always yields a more accurate representation of the amplitude of the wave at each sample's moment in time. Because we are talking about a finite dynamic range, all of which is theoretically audible to the human ear, increasing resolution within that range should be noticeable as long as your ears and brain are good enough to capitalize on it.
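To put rough numbers on that "vertical resolution" point, here's a small sketch (my own illustration, not from the original post) of the theoretical dynamic range each bit depth buys you; each extra bit adds roughly 6 dB:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of N-bit PCM: 20*log10 of the number
    of amplitude levels, i.e. ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} amplitude levels, "
          f"~{dynamic_range_db(bits):.1f} dB dynamic range")
```

So 16-bit gives you about 96 dB to work with and 24-bit about 144 dB; the jump is 256 times as many amplitude steps inside the same range.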
But a 44.1 kHz sampling rate is capable of capturing a wave up to 22.05 kHz (the Nyquist limit, half the sampling rate), and normal adult humans can't even hear that high. If you assume completely error- and jitter-free conversion, then 44.1 kHz is theoretically enough to provide all the resolution necessary to capture the entire range of human frequency perception. Therefore higher sampling rates do not increase our perceived resolution of the audio. Theoretically, that is.
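One way to see why bass in particular is comfortably covered at 44.1 kHz is a toy calculation (my own, not from the original post) of how many samples describe each cycle of a tone:

```python
def samples_per_cycle(sample_rate_hz: float, tone_hz: float) -> float:
    """How many samples land within one cycle of a tone at this rate."""
    return sample_rate_hz / tone_hz

print(samples_per_cycle(44_100, 60))      # 60 Hz bass note: 735 samples/cycle
print(samples_per_cycle(44_100, 20_000))  # 20 kHz tone: ~2.2 samples/cycle
print(44_100 / 2)                         # Nyquist limit: 22050.0 Hz
```

A 60 Hz bass fundamental gets hundreds of samples per cycle at CD rates, while a 20 kHz tone sits right near the two-samples-per-cycle Nyquist floor.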
Go back to the LCD screen analogy. Raising the sampling rate would be akin to increasing the horizontal resolution. But imagine that on top of the screen there are little strips of black paper running right between each column of pixels at "44.1 kHz." The strips represent the frequency limit of your hearing. At 44.1, the strips don't cover any pixels, so you can still see each and every pixel on the screen. Now double the resolution to "88.2 kHz": a second column of pixels appears underneath each strip of paper. Because the paper is there, you don't see the new pixels, and the picture looks exactly the same as before. Unlike an increase in bit depth, the increase in frequency resolution happened entirely outside the limits of your perception.
HOWEVER, the above analogy makes assumptions that are NOT true in the real world. It assumes the converters and clocks are 100% accurate and error-free. They aren't, and how far off they are depends on the quality of the components. Go back to our screen. We can simulate jitter and errors by randomly nudging our columns of pixels. Now some columns are partially covered by the paper strips, and there are places between the strips that aren't fully filled by pixels. Even to our eyes, the picture no longer looks as clear as it should. That's real-world playback of a 44.1 kHz recording. Jump again to 88.2 kHz. Because we have doubled the resolution, all those gaps and errors are filled in with replacement pixels: a column of pixels may be a bit off, but the new second column is there to fill the gaps and smooth the mistakes. Even though the screen technically still has reproduction errors, it looks great to us, because the errors are now too small to perceive.
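To put a rough number on those timing errors, here's a back-of-the-envelope sketch (my own, not from the original post) of the worst-case amplitude error that clock jitter introduces when sampling a full-scale sine: the error is the signal's maximum slew rate times the timing error, roughly 2π·f·Δt as a fraction of full scale. Note that it scales with signal frequency, so it hits treble far harder than bass, which ties back to the OP's question:

```python
import math

def jitter_error_fraction(tone_hz: float, jitter_s: float) -> float:
    """Worst-case amplitude error (fraction of full scale) from a timing
    error of jitter_s seconds on a full-scale sine at tone_hz."""
    return 2 * math.pi * tone_hz * jitter_s

JITTER = 1e-9  # assume 1 ns of clock jitter (illustrative figure)
for f in (60, 1_000, 20_000):
    db = 20 * math.log10(jitter_error_fraction(f, JITTER))
    print(f"{f:>6} Hz tone: worst-case jitter error ~ {db:.0f} dBFS")
```

With 1 ns of jitter the error on a 60 Hz bass tone sits down around −128 dBFS, far below a 16-bit noise floor, while at 20 kHz it rises to roughly −78 dBFS, which is well within range of what good converters and clocks are fighting over.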
That's why in the real world, and especially with lower-grade recording and playback equipment, a higher sampling rate DOES improve sound quality: it reduces our perception of errors in sampling timing and accuracy by making them smaller than we can perceive. It is also why upgrading converters, or even just upgrading the CLOCK that drives the converters (improving timing accuracy), can have the same impact on perceived quality. That's why they make standalone master clocks for recording studios that can cost thousands of dollars. If you have errors, you can either get rid of them or make them smaller, and either way it will sound better.