Ever since we upgraded to Scala 2.13, queries that access the date column in Cassandra fail with an exception: `Unable to make unsigned int (for date) from: '1601856000000'`.
That value is obviously too big to fit in an unsigned int. And according to the Cassandra documentation, `date` is stored as an unsigned int representing days since epoch.
But when I look into the phantom-dsl `DateSerializer` code, I see it deliberately taking the millis-since-epoch value from all of the supported date types, which is exactly the value in the exception.
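To illustrate the mismatch (a sketch, not part of the actual serializer): the value from the exception is milliseconds since epoch, which far exceeds the unsigned int range, while the days-since-epoch value Cassandra expects fits comfortably:

```scala
// The value from the exception, interpreted as millis since epoch
val millis = 1601856000000L

// What the DATE type actually expects: days since epoch
val days = millis / 86400000L // 18540, i.e. 2020-10-05

// Unsigned int max is 4294967295; the millis value overflows it,
// the days value does not
val fitsAsMillis = millis <= 4294967295L // false
val fitsAsDays   = days   <= 4294967295L // true
```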
Column:

```scala
object date extends DateColumn with ClusteringOrder with Descending
```
Query/method:

```scala
select
  .where(_.userId eqs userId)
  .and(_.date gte startDate)
  .fetch()(implicitly, executionContext)
```
Phantom-dsl version: 2.59.0
I already tried using `LocalDate` (same issue) and representing the column as a `StringColumn`, which works for the query if I format the date as `yyyy-MM-dd`, but then the value from the result set cannot be parsed back into a `Date`.
What am I missing here?
After a while a colleague came up with a solution: if you define the column type as `com.datastax.driver.core.LocalDate`, it does work.
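A sketch of what that looks like (the table and row names are illustrative, not from our actual schema): the column is typed with the driver's own `LocalDate`, which serializes to days since epoch as the `DATE` type expects, and the query parameter is built the same way.

```scala
import com.datastax.driver.core.LocalDate

// Column typed with the driver's LocalDate instead of java.util.Date;
// this assumes phantom's generic Col[_] column syntax is available
// in your version:
object date extends Col[LocalDate] with ClusteringOrder with Descending

// The query parameter must then also be a driver LocalDate, e.g.
val startDate: LocalDate = LocalDate.fromMillisSinceEpoch(System.currentTimeMillis())
```

With this, the serializer no longer hands a millis value to a column that expects days, which is what triggered the "unable to make unsigned int" error.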