I'm trying to create a custom Django encryption field using Fernet. The libraries I found for doing this automatically seem to be outdated or incompatible with Django > 3.0.
In this thread I found the following code:
import base64

from django.db.models import CharField
from cryptography.fernet import Fernet
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

from core import settings


class SecureString(CharField):
    # Derive a 32-byte Fernet key from SECRET_KEY via PBKDF2
    salt = bytes(settings.SECURE_STRING_SALT, encoding="raw_unicode_escape")
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(),
                     length=32,
                     salt=salt,
                     iterations=100000,
                     backend=default_backend())
    key = base64.urlsafe_b64encode(kdf.derive(settings.SECRET_KEY.encode('utf-8')))
    f = Fernet(key)

    def from_db_value(self, value, expression, connection):
        # Decrypt when loading from the database
        return str(self.f.decrypt(value), encoding="raw_unicode_escape")

    def get_prep_value(self, value):
        # Encrypt before saving to the database
        return self.f.encrypt(bytes(value, encoding="raw_unicode_escape"))
It seems to work for the encryption part (if I check the database field with a manager program, the content is shown as a sort of Chinese characters), but not for decryption.
Every time I save a record in the Admin, this error is triggered (although the record is saved in the database):
    raise TypeError("{} must be bytes".format(name))
TypeError: token must be bytes
Isn't the value supposed to already be bytes in the database record, given the get_prep_value code? The same error happens when trying to list records in the Admin (i.e. when the reading path is hit).
How can I solve this? I am using SQL Server as the database, in case that is relevant.
It turned out the problem was related to the database collation (Modern_Spanish_CI_AI, cp1252). I have to convert the string from and to this code page for it to work. I'm not sure whether this is only necessary because the database is SQL Server rather than one of the natively supported backends.
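The two methods ended up looking roughly like this. It's a minimal sketch: the cp1252 code page is an assumption taken from my column's collation (match it to yours), and the None checks are just defensive additions for nullable columns:

    def from_db_value(self, value, expression, connection):
        if value is None:
            return value
        # The SQL Server driver hands back a str, not bytes, so re-encode it
        # with the column's code page (cp1252 here) to recover the token bytes
        token = value.encode("cp1252")
        return self.f.decrypt(token).decode("raw_unicode_escape")

    def get_prep_value(self, value):
        if value is None:
            return value
        token = self.f.encrypt(value.encode("raw_unicode_escape"))
        # Hand the driver a str decoded with the same code page so the token
        # round-trips unchanged through the cp1252 collation
        return token.decode("cp1252")

Since Fernet tokens are URL-safe base64 (plain ASCII), decoding them with cp1252 is lossless.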
This works. The data is encrypted in the database, but as standard characters rather than the Chinese-looking ones. I guess that should have been a clue for me that collation could be the issue.
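For completeness, the field is declared like any other CharField; the model and import path below are placeholders:

    from django.db import models
    from myapp.fields import SecureString  # hypothetical module path

    class Customer(models.Model):
        # Encrypted at rest, decrypted transparently when read back
        tax_id = SecureString(max_length=255)

Note that max_length has to accommodate the Fernet token, which is considerably longer than the plaintext it encrypts.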