```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent.TokenBucket (TokenBucket, tokenBucketWait)
import Control.Monad.Reader
import qualified Data.ByteString.Char8 as BS
import qualified Data.ByteString.Lazy as BSL
import Network.HTTP.Client
import System.Environment (getEnv)

data TestEnv = TestEnv
  { rateLimiter' :: !TokenBucket
  , apiManager   :: !Manager
  , apiKey       :: !BS.ByteString
  }

type BunnyReaderT m = ReaderT TestEnv m

class MonadIO m => HasBunny m where
  runRequest :: Request -> m (Response BSL.ByteString)

  applyAuth :: Request -> m Request
  applyAuth req = do
    key <- fetchAuth
    return $ req { requestHeaders = ("AccessKey", key) : requestHeaders req }

  fetchAuth :: m BS.ByteString
  fetchAuth = liftIO $ BS.pack <$> getEnv "AccessKey"

instance MonadIO m => HasBunny (BunnyReaderT m) where
  runRequest req = do
    config  <- ask
    authReq <- applyAuth req
    let burstSize   = 75
        toInvRate r = round (1e6 / r)
        invRate     = toInvRate 75
    liftIO $ tokenBucketWait (rateLimiter' config) burstSize invRate
    liftIO $ httpLbs authReq (apiManager config)

  fetchAuth = do
    config <- ask
    return $ apiKey config

type TestM = ReaderT TestEnv IO

instance HasBunny TestM where
  ... -- to be defined
```
This is my code for a `ReaderT`-based monad implementing the `HasBunny` type class, in which `runRequest` should handle parallel API calls with rate limiting. I'd appreciate a review of whether my token-bucket usage actually limits it to 75 requests per second.

How do I define another `ReaderT` monad that implements the same type class for the test suite, so that network calls can be mocked out? Basically, I want a `TestM` monad that behaves differently from the `BunnyReaderT` monad, and a test that passes the following assertion: only 75 requests per second are made, even if a total of 750 concurrent requests are made.

I've been stuck on this problem for a while, so any help or leads would be highly appreciated, both on reviewing the rate limiting and on implementing the instance for the `TestM` monad.
This is a somewhat complicated design problem, but let me walk you through a simple example. Note that there are lots of minor design decisions to make along the way. Because this is an SO answer and not a 10-part blog post, I've avoided talking about all the different alternatives, so this answer shows one way to do it, certainly not the only way, and not necessarily the best way.
A Simple Example
For this answer I'm going to consider a much simplified problem. Suppose we have a program that prints "foo" and "bar" with a configurable delay in between:
and we'd like to test it, to make sure the correct strings are printed with the correct timing.
Mocking out `putStrLn`

To start, maybe we only want to mock out the `putStrLn`. We can define a single monad type class for our application, `MonadApp`, with a method for the call we want to mock out, renamed to `appPutStrLn` to avoid a clash with `putStrLn` from `Prelude`. Any `MonadApp` will also need to be a `Monad`, so that should be a superclass. In addition, for the monadic effects we don't want to mock out (e.g., accessing `Config` and performing a `threadDelay`), including them as superclasses results in the least disruptive top-level type signatures when we rewrite our functions to use a general `MonadApp` monad.

Rewriting `fooBar` to use this class, we replace the `putStrLn` calls with `appPutStrLn` and generalize its type. In order to run `fooBar` in production mode, we need to define a concrete monad that implements `MonadApp` with the original implementation of `appPutStrLn`; for this purpose, we'll use a `newtype` for our monad. When we try to define a `MonadApp` instance for it:
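Sketched out (reusing the assumed `Config` and `delay` from the earlier snippet), the class, the rewritten `fooBar`, and the bare `newtype` might look like this:

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Concurrent (threadDelay)
import Control.Monad.Reader

newtype Config = Config { delay :: Int }

-- One class method per call we want to mock out; the effects we keep
-- (Config access, threadDelay) come in via the superclasses.
class (MonadReader Config m, MonadIO m) => MonadApp m where
  appPutStrLn :: String -> m ()

fooBar :: (MonadApp m) => m ()
fooBar = do
  appPutStrLn "foo"
  asks delay >>= liftIO . threadDelay
  appPutStrLn "bar"

-- The production monad. Writing `instance MonadApp App` for this bare
-- newtype is what produces the missing-instance errors discussed next.
newtype App a = App (ReaderT Config IO a)
```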
we'll get errors about missing instances for `Monad`, `MonadReader Config`, and `MonadIO`. Even though `ReaderT Config IO` satisfies all these constraints, the `newtype` doesn't by default. Using the `GeneralizedNewtypeDeriving` extension, we can derive these automatically, which allows us to write:
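A sketch with the derived instances and the `MonadApp` instance, plus the runner and production `main` mentioned next (`runApp` and the `Config` layout are my assumptions):

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Concurrent (threadDelay)
import Control.Monad.Reader

newtype Config = Config { delay :: Int }

class (MonadReader Config m, MonadIO m) => MonadApp m where
  appPutStrLn :: String -> m ()

fooBar :: (MonadApp m) => m ()
fooBar = do
  appPutStrLn "foo"
  asks delay >>= liftIO . threadDelay
  appPutStrLn "bar"

-- Deriving makes the newtype a Monad, MonadReader Config, and MonadIO "for free".
newtype App a = App (ReaderT Config IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

-- The original, unmocked implementation:
instance MonadApp App where
  appPutStrLn = liftIO . putStrLn

-- A "runner" and the production main:
runApp :: App a -> Config -> IO a
runApp (App act) = runReaderT act

main :: IO ()
main = runApp fooBar (Config 1000000)  -- one-second delay
```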
It's also helpful to define a "runner" for the `App` monad; the resulting `main` function for production just applies the runner to `fooBar` and a `Config`.

For the test monad, we want to mock out `appPutStrLn` so it creates a log of what was printed and when, so we can check whether the right things were printed with the right timing. We'll do this using the `RWS` monad, with a `Reader` for the `Config` and a `Writer` for a test log. The test monad itself is defined using a newtype, with an instance that performs the logging, and a runner:
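A sketch of the test side, here running the `RWS` transformer over `IO` so the log can carry real timestamps (`TestLog` and `runTestApp` are assumed names):

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Reader
import Control.Monad.RWS
import Data.Time.Clock (UTCTime, getCurrentTime)

newtype Config = Config { delay :: Int }

class (MonadReader Config m, MonadIO m) => MonadApp m where
  appPutStrLn :: String -> m ()

-- The log records what was printed, and when.
type TestLog = [(UTCTime, String)]

newtype TestApp a = TestApp (RWST Config TestLog () IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

-- Instead of printing, record the line with a timestamp.
instance MonadApp TestApp where
  appPutStrLn str = do
    t <- liftIO getCurrentTime
    TestApp $ tell [(t, str)]

runTestApp :: TestApp a -> Config -> IO (a, TestLog)
runTestApp (TestApp act) cfg = evalRWST act cfg ()
```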
The `main` function for testing can use a shorter delay, to speed up the test. It returns the `TestLog`, which the test scaffolding can inspect to determine if the output was correct. The full resulting program is:
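Reassembling the pieces, a full program along these lines might look like (a sketch; `runApp`, `runTestApp`, `prodMain`, and `testMain` are assumed names):

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Concurrent (threadDelay)
import Control.Monad.Reader
import Control.Monad.RWS
import Data.Time.Clock (UTCTime, getCurrentTime)

newtype Config = Config { delay :: Int }  -- microseconds

class (MonadReader Config m, MonadIO m) => MonadApp m where
  appPutStrLn :: String -> m ()

fooBar :: (MonadApp m) => m ()
fooBar = do
  appPutStrLn "foo"
  asks delay >>= liftIO . threadDelay
  appPutStrLn "bar"

-- Production monad
newtype App a = App (ReaderT Config IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

instance MonadApp App where
  appPutStrLn = liftIO . putStrLn

runApp :: App a -> Config -> IO a
runApp (App act) = runReaderT act

prodMain :: IO ()
prodMain = runApp fooBar (Config 1000000)

-- Test monad: log what was printed, and when
type TestLog = [(UTCTime, String)]

newtype TestApp a = TestApp (RWST Config TestLog () IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

instance MonadApp TestApp where
  appPutStrLn str = do
    t <- liftIO getCurrentTime
    TestApp $ tell [(t, str)]

runTestApp :: TestApp a -> Config -> IO (a, TestLog)
runTestApp (TestApp act) cfg = evalRWST act cfg ()

testMain :: IO TestLog
testMain = snd <$> runTestApp fooBar (Config 10000)  -- 10 ms delay for the test

main :: IO ()
main = testMain >>= mapM_ print
```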
Running it in test mode yields the timestamped log; running it in production mode prints "foo" and "bar" with the configured delay.
Using a Different Reader Context
You also asked whether you needed to use the same context (`TestEnv`, in your example) for both the production and test monads. No, you don't. If, for example, you wanted to add some testing-specific configuration, like a flag indicating whether `appPutStrLn` should actually print its output (in addition to logging it) when running in test mode, then the way you'd do this is by "mocking out" the `asks` call used to fetch from the `Config` part of the context, and rewriting `fooBar` to use `appConfig` in place of `asks`. The `App` monad would only contain a `Reader Config`, as before, since it doesn't need/use the extra `TestConfig` context; you'd just need to update its `MonadApp` instance with an appropriate definition for `appConfig`. The `TestApp` monad, on the other hand, would be modified to read from both `Config` and `TestConfig` contexts, with an appropriate definition of `appConfig` in its instance, plus an updated `appPutStrLn` definition and an appropriately updated runner.
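A sketch of these modifications, assuming a `quiet` flag in `TestConfig` and an `appConfig :: (Config -> a) -> m a` class method:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Concurrent (threadDelay)
import Control.Monad (unless)
import Control.Monad.Reader
import Control.Monad.RWS
import Data.Time.Clock (UTCTime, getCurrentTime)

newtype Config = Config { delay :: Int }
newtype TestConfig = TestConfig { quiet :: Bool }  -- testing-only settings

class (MonadIO m) => MonadApp m where
  appPutStrLn :: String -> m ()
  appConfig   :: (Config -> a) -> m a  -- replaces direct `asks` calls

fooBar :: (MonadApp m) => m ()
fooBar = do
  appPutStrLn "foo"
  appConfig delay >>= liftIO . threadDelay
  appPutStrLn "bar"

-- Production monad: only a Config context, as before.
newtype App a = App (ReaderT Config IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

instance MonadApp App where
  appPutStrLn = liftIO . putStrLn
  appConfig   = asks

-- Test monad: reads both Config and TestConfig.
type TestLog = [(UTCTime, String)]

newtype TestApp a = TestApp (RWST (Config, TestConfig) TestLog () IO a)
  deriving (Functor, Applicative, Monad, MonadReader (Config, TestConfig), MonadIO)

instance MonadApp TestApp where
  appConfig f = asks (f . fst)
  appPutStrLn str = do
    t <- liftIO getCurrentTime
    TestApp $ tell [(t, str)]
    q <- asks (quiet . snd)
    unless q $ liftIO $ putStrLn str  -- echo to console unless quiet

runTestApp :: TestApp a -> Config -> TestConfig -> IO (a, TestLog)
runTestApp (TestApp act) cfg tcfg = evalRWST act (cfg, tcfg) ()
```

For example, `runTestApp fooBar (Config 1000) (TestConfig True)` runs quickly and quietly, while `runTestApp fooBar (Config 1000000) (TestConfig False)` echoes the output with realistic delays.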
Now you can run the test quickly and quietly, or with realistic output and delays (while still generating a test log); in the latter case the logged lines are also echoed to the console. The complete program with this modification follows the same overall structure as before, with the `TestConfig` changes folded in.
Mocking out IO
Finally, you could also consider completely mocking out all the `IO`, including the `threadDelay` calls. This would allow you to run a "pure" test that simulates the passage of time, allowing you to run time-based tests much faster, without having to decrease delays and/or relax rate limiting. The resulting complete program might look something like this:
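For instance, a pure variant along these lines, where the test monad's `appThreadDelay` merely advances a simulated clock (the names and the `Integer` microsecond clock are my assumptions):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Concurrent (threadDelay)
import Control.Monad.Reader
import Control.Monad.RWS

newtype Config = Config { delay :: Int }  -- microseconds

-- Every effect now goes through the class, so a test instance can be pure.
class (Monad m) => MonadApp m where
  appPutStrLn    :: String -> m ()
  appThreadDelay :: Int -> m ()
  appConfig      :: (Config -> a) -> m a

fooBar :: (MonadApp m) => m ()
fooBar = do
  appPutStrLn "foo"
  appConfig delay >>= appThreadDelay
  appPutStrLn "bar"

-- Production: real IO effects.
newtype App a = App (ReaderT Config IO a)
  deriving (Functor, Applicative, Monad, MonadReader Config, MonadIO)

instance MonadApp App where
  appPutStrLn    = liftIO . putStrLn
  appThreadDelay = liftIO . threadDelay
  appConfig      = asks

runApp :: App a -> Config -> IO a
runApp (App act) = runReaderT act

-- Pure test monad: a Writer logs output, State is a simulated clock (in µs).
type TestLog = [(Integer, String)]

newtype TestApp a = TestApp (RWS Config TestLog Integer a)
  deriving (Functor, Applicative, Monad, MonadReader Config)

instance MonadApp TestApp where
  appPutStrLn str  = TestApp $ get >>= \t -> tell [(t, str)]
  appThreadDelay d = TestApp $ modify (+ fromIntegral d)  -- just advance the clock
  appConfig        = asks

runTestApp :: TestApp a -> Config -> (a, TestLog)
runTestApp (TestApp act) cfg = evalRWS act cfg 0
```

Here `runTestApp fooBar (Config 1000000)` returns instantly, with no real delay, and its log records the simulated times at which each line was "printed".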