So I'm reading that the knapsack problem is considered exponential because its dynamic-programming solution runs in O(nW) time, and the running time therefore increases exponentially with respect to the length of the bitstring of W.
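For concreteness, here's a minimal sketch of the O(nW) dynamic program I'm referring to (the function name and variable names are just illustrative):

```python
def knapsack(values, weights, W):
    """0/1 knapsack via the classic DP table: O(n*W) time."""
    n = len(values)
    # best[c] = maximum value achievable with capacity c
    best = [0] * (W + 1)
    for i in range(n):
        # Iterate capacities downward so each item is used at most once.
        for c in range(W, weights[i] - 1, -1):
            best[c] = max(best[c], best[c - weights[i]] + values[i])
    # W appears as a loop bound, so the runtime grows with the *value*
    # of W -- which is exponential in the number of bits used to write W.
    return best[W]
```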
But if that's the case, doesn't that mean that an algorithm that takes an integer N as input and prints every number between 0 and N also runs in exponential (rather than linear) time with respect to the length of the bitstring of N?
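Here is the trivial algorithm I mean, as a sketch:

```python
def print_up_to(N):
    # The loop body runs N + 1 times, but the input N occupies only
    # about log2(N) bits, so measured against the input *size* the
    # running time is exponential.
    for i in range(N + 1):
        print(i)
```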
That is correct. An algorithm that takes an integer N as input and prints all the integers from 0 to N runs in time exponential in the number of bits of N. So when we say an algorithm is "polynomial" or "exponential", what we mean depends on what unit we are using.
Typically, the unit is the size of the input (which is why, in the knapsack case, we measure in terms of the number of bits). The reason your second example seems strange is that, normally, when we need to iterate over N things, the size of the input is also of order N. For example, to sum N numbers we must read in N numbers, and doing so dominates the computation needed to read the single number N itself.
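A quick sketch of that contrast (illustrative only), to pair with `print_up_to` above:

```python
def total(numbers):
    # The input is a list of N numbers, so merely reading it takes
    # Theta(N) time; one pass to add them up is therefore linear in
    # the input size. Contrast print_up_to, whose input is a single
    # integer of only ~log2(N) bits.
    s = 0
    for x in numbers:
        s += x
    return s
```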
By the way, we also typically think of reading in a number as taking constant time, but this isn't completely accurate, since a number N takes about log2(N) bits to write down. For practical purposes it doesn't matter much, because computers have fixed-width hardware that handles 1 the same way as 1000000000 (both fit in a single machine word). There is some relevant discussion here: http://en.wikipedia.org/wiki/Random-access_machine.
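One way to see where the constant-time assumption breaks down, using Python's arbitrary-precision integers (the operand sizes and repetition count here are arbitrary):

```python
import timeit

small = 10**9         # fits in a single machine word
huge = 1 << 100_000   # ~100,000 bits: far beyond any hardware word

# Multiplying word-sized integers is effectively constant time...
print(timeit.timeit(lambda: small * small, number=1000))
# ...but on huge operands the same operation is visibly slower,
# because hardware arithmetic only covers word-sized values.
print(timeit.timeit(lambda: huge * huge, number=1000))
```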