Once the sample size of yes/no questions gets large enough, the distribution of results approximates a "bell curve type sample". It ends up being the same calculation as before, with a couple of differences.
The mean from your "bell curve type sample" is the proportion you expect to answer "yes"; we'll call that p_hat. Your standard error is sqrt(p_hat * (1 - p_hat) / n), where n is your sample size. In other words: the proportion of people who answered yes, times the proportion who answered no, divided by the number of people sampled, with the square root taken of all of that.
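A quick sketch of that formula in Python (the 40% and 1,000 figures here are just illustrative numbers, not from any particular poll):

```python
import math

def standard_error(p_hat, n):
    """Standard error of a sample proportion: sqrt(p_hat * (1 - p_hat) / n)."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# Suppose 40% of a sample of 1,000 answered "yes"
se = standard_error(0.40, 1000)
print(round(se, 4))  # 0.0155
```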
So your standard error is largest when you expect the proportion to be 50/50. Using that estimate, you can work out the worst case. For a sample of 1,000, your margin of error (at 95% confidence) comes out to around plus or minus 3.1% (come polling time, you'll see this number, or one close to it, a lot). Gallup's reported margin of error is a bit higher because they do some weighting and sample design variation.
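That worst-case 3.1% can be checked directly. A minimal sketch, assuming the standard 1.96 z-score for 95% confidence:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    # z = 1.96 is the z-score covering 95% of a normal distribution
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Worst case: a 50/50 split with a sample of 1,000
moe = margin_of_error(0.5, 1000)
print(f"{moe:.1%}")  # 3.1%
```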
Two things to note. First, the margin of error calculation isn't affected at all by the size of the population. Whether you're trying to get the answer from Wyoming residents (0.18% of the US population), from California residents (nearly 12% of the US population), or from the entire US, 1,000 properly sampled people will give you comparable margins of error. If 40% believe something, then randomly sampling 1,000 people will still get you around 400 who believe it, whether that's 1,000 out of 500k people, 1,000 out of 30 million, or 1,000 out of 300 million. And even if Gallup had sampled a thousand times more people than they did, they still would have gotten around 40%. That's just how percentages work.
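You can see the population-size point in a quick simulation. This sketch assumes a hypothetical population where exactly 40% believe, then samples 1,000 people without replacement from populations of very different sizes; the sampled proportion lands near 40% either way:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def sample_proportion(pop_size, true_p=0.40, n=1000):
    # Pretend persons 0..believers-1 hold the belief and the rest don't,
    # then poll n distinct people at random from the whole population.
    believers = int(pop_size * true_p)
    picks = random.sample(range(pop_size), n)
    return sum(1 for i in picks if i < believers) / n

for pop_size in (500_000, 30_000_000, 300_000_000):
    print(f"{pop_size:>11,}: {sample_proportion(pop_size):.1%}")
```

Each line prints a proportion within a few points of 40%, regardless of whether the population is half a million or 300 million.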
The other is a little more stats geeky. The normal distribution stops being a good approximation of the binomial distribution when the proportion is very close to zero or one. If less than half a percent believed, then 1,000 people wouldn't be a large enough sample to use the normal approximation. This should make sense intuitively: it would mean you'd be seeing fewer than five people out of the thousand with a different answer.
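One common rule of thumb (the threshold varies by textbook; five is the loosest version, matching the "fewer than five people" intuition above) is that the normal approximation needs at least five expected answers on each side. A quick sketch:

```python
def normal_approx_ok(p, n, threshold=5):
    # Rule of thumb: require at least `threshold` expected "yes" answers
    # AND at least `threshold` expected "no" answers in the sample.
    return n * p >= threshold and n * (1 - p) >= threshold

print(normal_approx_ok(0.40, 1000))   # True
print(normal_approx_ok(0.004, 1000))  # False: only ~4 expected "yes" answers
```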
Stats lesson over. Back to your regularly scheduled programming.