I can't offer hard evidence, but there may be a slight performance hit from carrying a PK (int/bigint) that is never used. However, having a PK does not necessarily mean you have a clustered index on that key. In our data warehousing applications we usually have a few tables whose PKs are hardly used; we simply don't put the clustered index on that column as long as there is another column that is better for joining. As I said, I can't give you hard evidence, but my opinion is based on the huge tables I deal with on a daily basis. As for the page size issue, I don't think that matters either (unless you're dealing with varbinary-like data types); you can always tune things like the fill factor (again, this is SQL Server-specific) and set them according to your workload to tweak performance.
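To make that concrete, here is a minimal T-SQL sketch of the pattern I'm describing. All table and column names are made up for illustration; the point is that `PRIMARY KEY NONCLUSTERED` keeps the PK for integrity while the clustered index goes on the column you actually join on, and `FILLFACTOR` leaves free space on each page:

```sql
-- Hypothetical table: keep the PK for integrity/maintenance, but
-- declare it NONCLUSTERED so the clustered index is free for the
-- column we actually join on.
CREATE TABLE dbo.FactSales (
    SalesId     BIGINT IDENTITY(1,1) NOT NULL,
    CustomerKey INT           NOT NULL,
    SaleDate    DATE          NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL,
    CONSTRAINT PK_FactSales PRIMARY KEY NONCLUSTERED (SalesId)
);

-- Cluster on the join column instead; the explicit fill factor
-- leaves room on each page and reduces page splits on inserts.
CREATE CLUSTERED INDEX CIX_FactSales_CustomerKey
    ON dbo.FactSales (CustomerKey)
    WITH (FILLFACTOR = 90);
```

The right fill factor depends on your insert/update pattern, so treat 90 as a placeholder, not a recommendation.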
A composite key definitely has performance and size implications: the index will most likely cover both columns, which means a bigger index and a bigger hit when a page split occurs. Also, in the OP's case the index can't be unique, which means more work for the engine's underlying sorting and hashing algorithms. Having a unique auto-incrementing PK that is currently unused is perfectly fine; many people (including me) have been burned by schemas that lacked such a column when it later came time to maintain the data. It is better to have a redundant PK now and use it later than to lack a proper PK and get burned when you need one; in most such scenarios the schema change would be ugly.
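A sketch of the two options, with hypothetical table names. Option A is the composite, non-unique key from the OP's situation; option B adds the narrow surrogate identity PK even though nothing references it yet:

```sql
-- Option A: composite, non-unique index over both columns.
-- Every index row carries both key columns, so the index is wider
-- and a page split moves more data.
CREATE TABLE dbo.OrderLine_Composite (
    OrderId INT NOT NULL,
    SkuId   INT NOT NULL,
    Qty     INT NOT NULL
);
CREATE CLUSTERED INDEX CIX_OrderLine_Composite
    ON dbo.OrderLine_Composite (OrderId, SkuId);  -- cannot be UNIQUE here

-- Option B: narrow, unique surrogate PK, kept even if unused today.
-- A schema change to add this later, on a populated table with
-- dependent objects, is far more painful than carrying it from day one.
CREATE TABLE dbo.OrderLine_Surrogate (
    OrderLineId BIGINT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_OrderLine PRIMARY KEY,
    OrderId INT NOT NULL,
    SkuId   INT NOT NULL,
    Qty     INT NOT NULL
);
```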
I'd also argue that your design/normalization could be flawed if you have a table whose PK is not referenced anywhere within your schema. Unless, of course, it is a data warehousing type of schema, where denormalization is the norm. Most data warehouse tables have a PK that is used nowhere else except to perform incremental loads and things of that nature.
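That incremental-load use looks something like the following sketch (all object names are invented): the surrogate PK serves purely as a high-water mark for picking up rows added since the last load, and is never joined on anywhere else.

```sql
-- Hypothetical incremental load: the PK is used only as a watermark.
DECLARE @LastLoadedId BIGINT;

SELECT @LastLoadedId = LoadedThroughId
FROM etl.LoadWatermark
WHERE TableName = 'FactSales';

-- Pull only the rows created since the previous load.
INSERT INTO dw.FactSales (SalesId, CustomerKey, SaleDate, Amount)
SELECT SalesId, CustomerKey, SaleDate, Amount
FROM staging.FactSales
WHERE SalesId > @LastLoadedId;

-- Advance the watermark for the next run.
UPDATE etl.LoadWatermark
SET LoadedThroughId = (SELECT MAX(SalesId) FROM staging.FactSales)
WHERE TableName = 'FactSales';
```

Note this pattern assumes the identity column is monotonically increasing and rows are never backdated; if they can be, a date or rowversion watermark is safer.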