Test case :
35000
-> the normalized scientific notation of the number is 3.5 * 10^4
-> the engineering notation is 35 * 10^3
A simple algorithm would keep dividing the number by 10 until the mantissa falls into the required range. However, this means the algorithm is O(d), where d is the number of digits in the number. Can we do better?
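One way to avoid the repeated division is to get the decimal exponent directly with a logarithm, then shift it down to a multiple of 3 for the engineering form. The sketch below is a minimal illustration of that idea (the function name and return shape are my own, not from the question); note that `log10` on floats can be off by one ULP near exact powers of ten, so a robust version would verify the result.

```python
import math

def sci_eng(x):
    """Return ((mantissa, exponent), (mantissa, exponent)) for the
    scientific and engineering notations of x, using log10 instead
    of repeated division (O(1) arithmetic rather than O(digits))."""
    if x == 0:
        return (0.0, 0), (0.0, 0)
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Decimal exponent: floor(log10(|x|)). Near exact powers of 10,
    # floating-point log10 may land just below the true value, so a
    # production version should double-check e against mag.
    e = math.floor(math.log10(mag))
    sci = (sign * mag / 10**e, e)
    # Engineering notation rounds the exponent down to a multiple of 3.
    e_eng = 3 * (e // 3)
    eng = (sign * mag / 10**e_eng, e_eng)
    return sci, eng
```

For the test case, `sci_eng(35000)` yields `(3.5, 4)` for scientific notation and `(35.0, 3)` for engineering notation, matching the hand-derived results above.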
The classic paper on the subject of printing human-friendly representations of floating point numbers can be read here. It's much too complex to be discussed as code in an answer here.