I was trying out some different encodings, and when I encoded some text repeatedly in base64/base32 (which one is used for each layer depends on a semi-random boolean list), I noticed that it was ridiculously slow. I don't understand why, because I thought these encodings were particularly fast. I can't figure out what makes it so slow; it'd be cool if you could help me.
This is the relevant part of the code:
from base64 import b64encode, b32encode
from random import random as rn
big_number = int(input("The number of encoding layers : "))
bool_list = [rn() < 0.5 for _ in range(big_number)]
sample_text = bytes("lorem ipsum", "utf8")
for curr_bool in bool_list:
    # Each layer re-encodes the output of the previous layer.
    sample_text = b64encode(sample_text) if curr_bool else b32encode(sample_text)
Memory- and time-expensive operations. This answer is based on the pivotal comment (by Wups): every layer of encoding makes sample_text longer, so each iteration of the loop has more data to encode than the one before.

base64 turns every 3 input bytes into 4 output bytes (a ratio of 4/3), and base32 turns every 5 input bytes into 8 output bytes (a ratio of 8/5 = 1.6). A modified version of the script (run as python .\SO\71009943.py; its sample output is truncated here) shows this growing ratio for the base64 and base32 encodings: like compound interest, the resultant length is multiplied at each layer by somewhere between base64's 4/3 and base32's 1.6. After big_number layers the text is therefore roughly between (4/3)^big_number and 1.6^big_number times its original size, so for bigger numbers both the memory use and the encoding time blow up exponentially.
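Since the modified script and its sample output were truncated above, here is a minimal sketch of what such a measurement could look like; the variable names (layers, text, start_len) are mine, not taken from the original script:

from base64 import b64encode, b32encode
from random import random as rn

layers = int(input("The number of encoding layers : "))
text = bytes("lorem ipsum", "utf8")
start_len = len(text)
for i in range(layers):
    # Pick base64 or base32 at random, as in the original script.
    text = b64encode(text) if rn() < 0.5 else b32encode(text)
    # The length ratio grows geometrically: ~4/3 per base64 layer,
    # ~8/5 per base32 layer (plus a little padding).
    print(i + 1, len(text), round(len(text) / start_len, 2))

As a rough back-of-the-envelope check: at an average of about 1.46x per layer (the geometric mean of 4/3 and 1.6), 60 layers already turn the 11-byte input into tens of gigabytes, and since every layer has to re-encode all of it, the total running time grows just as fast.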