Does it really matter if a game is using all the available memory on a video card? Is this not the reason why we have such high memory bandwidth on today's high-end cards? I guess what I am asking is, what is that bandwidth being used for?
It partly is, but most of that bandwidth goes to the GPU reading data out of VRAM and writing results back.
The GPU has to project the 3D scene into 2D screen space, then do a basic fill of the rendered surfaces (there are plenty of ways to handle this with DX10+ and OpenGL, and I only grok a handful of them), apply textures, compute lighting, run shaders on parts of the frame (which may themselves sample additional textures, use secondary buffers, and feed into the lighting/shadow work), draw shadows... and that can take many passes. It works through layer after layer of buffers at roughly your screen resolution (more with anti-aliasing, depending on the type and the game engine), and it needs enough bandwidth that the GPU's various computational units aren't left twiddling their thumbs waiting on new data.
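To get a feel for the numbers, here's a rough back-of-envelope sketch of per-frame buffer traffic. The pass count, bytes-per-pixel, MSAA factor, and read/write assumption are all illustrative guesses, not measurements from any particular engine:

```python
# Rough estimate of framebuffer traffic per second for a multi-pass renderer.
# All constants below are illustrative assumptions, not engine measurements.

WIDTH, HEIGHT = 2560, 1440      # screen resolution
BYTES_PER_PIXEL = 4             # RGBA8 color target
MSAA = 4                        # anti-aliasing multiplier on buffer size
PASSES = 8                      # geometry, lighting, shadows, post, etc. (assumed)
READS_PLUS_WRITES = 2           # each pass roughly reads and writes its target
FPS = 60

buffer_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * MSAA
per_frame = buffer_bytes * PASSES * READS_PLUS_WRITES
per_second = per_frame * FPS

print(f"One buffer:  {buffer_bytes / 2**20:.1f} MiB")
print(f"Per frame:   {per_frame / 2**20:.1f} MiB")
print(f"Per second:  {per_second / 2**30:.1f} GiB/s (before any texture fetches)")
```

Even with these fairly conservative numbers, buffer traffic alone works out to roughly 50 GiB/s before a single texture is sampled, which is a big chunk of where the headline bandwidth figure goes.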
The total amount of memory is primarily for textures, since they tend to be large compared to anything else loaded into VRAM. Generally, games that are mod-friendly can warrant larger VRAM sizes than others, since hi-res textures tend to be an easy way to improve the look, though they're a woefully inefficient way to do it.
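To see why textures dominate, here's a quick sketch of how much VRAM a single texture costs at different resolutions. The 4:1 block-compression ratio and the one-third mipmap overhead are common rules of thumb, assumed here purely for illustration:

```python
# VRAM cost of a single texture at various resolutions.
# Compression ratio and mipmap overhead are rule-of-thumb assumptions.

BYTES_PER_TEXEL = 4        # uncompressed RGBA8
COMPRESSION = 4            # assumed block-compression ratio (BC1/BC3-style)
MIPMAP_OVERHEAD = 4 / 3    # a full mip chain adds ~1/3 on top of the base level

for side in (512, 1024, 2048, 4096):
    base = side * side * BYTES_PER_TEXEL / COMPRESSION
    total = base * MIPMAP_OVERHEAD
    print(f"{side}x{side}: {total / 2**20:6.1f} MiB")
```

Doubling the side length quadruples the footprint, which is why a hi-res texture pack can blow past a card's VRAM while barely changing anything else about the game.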
I assumed it was used for swapping new/old textures in and out as the scene changes. Is it game-engine dependent? Do certain game engines load everything into video RAM, even if a texture is not being displayed on screen?
It varies, and is very much dependent on both the game and the engine. Most engines use some combination of loading textures on an as-needed basis and pre-loading textures and models as you approach an area.
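As a sketch of that "pre-load as you approach" strategy, here's a toy streamer that loads texture sets for nearby areas and evicts the least recently used ones when a VRAM budget is exceeded. The budget, distances, and area names are all made up for illustration; real engines are far more sophisticated:

```python
# Toy texture streamer: pre-load textures for areas near the player,
# evict least-recently-used texture sets when over a VRAM budget.
# All sizes, distances, and area names are hypothetical.

from collections import OrderedDict

VRAM_BUDGET_MIB = 80
PRELOAD_RADIUS = 2          # load areas within this distance of the player

# area id -> (distance from player, texture set size in MiB)
WORLD = {
    "town":   (0, 40),
    "forest": (1, 30),
    "cave":   (2, 20),
    "castle": (5, 50),
}

resident = OrderedDict()    # area -> size, ordered oldest-use first

def touch(area, size):
    """Mark a texture set as recently used, loading it if needed."""
    if area in resident:
        resident.move_to_end(area)
        return
    resident[area] = size
    # Evict least-recently-used sets until we're back under budget.
    while sum(resident.values()) > VRAM_BUDGET_MIB and len(resident) > 1:
        evicted, freed = resident.popitem(last=False)
        print(f"  evict {evicted} ({freed} MiB)")

for area, (distance, size) in WORLD.items():
    if distance <= PRELOAD_RADIUS:
        print(f"preload {area} ({size} MiB)")
        touch(area, size)

print("resident:", list(resident))
```

Running it, "castle" is never loaded because it's too far away, and "town" gets evicted once the nearer areas push the total past the budget; that's the basic as-needed/pre-load trade-off in miniature.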