I come from Ruby, and have sort of adopted the methodology of the single responsibility principle, encapsulation, loose coupling, small testable methods, etc., so my code tends to jump from method to method frequently. That's the way I am used to working in the Ruby world. I argue that this is the best way to work, mainly for BDD, since once you start having "large" methods that do multiple things, they become very difficult to test.
I am wondering if there are any drawbacks to this approach in terms of noticeable performance differences.
While not necessarily Objective-C specific, too many method calls that are not inlined by the compiler in non-interpreted or non-dynamic-dispatch languages/runtimes will create performance penalties, since every call to a new function requires pushing a return address onto the stack as well as setting up a new stack frame that must be cleaned up when the callee returns to the caller.

Additionally, functions that take arguments by reference can incur performance penalties, since the function call is considered opaque to the compiler, removing its ability to make certain optimizations. In other words, the compiler cannot re-order read/write operations across a function call for memory addresses that are passed to the function as modifiable arguments (i.e., pointers to non-const objects), since there could be side effects on those memory addresses that would create hazards if an operation were re-ordered.

That said, in the end you would only really notice these performance losses in a tight CPU-bound loop. By contrast, any function that makes an OS syscall could cause your program to lose its time slice in the OS scheduler or be pre-empted, which takes exorbitantly longer than any non-inlined function call itself.