While running SimpleCov on macOS, the resulting coverage makes little sense.
If the following test is run:
rails test test/models/channel_test.rb
> 4 runs, 4 assertions, 0 failures, 0 errors, 0 skips
> Coverage report generated for Minitest to /Volumes/[...]/coverage. 0 / 0 LOC (100.0%) covered.
Yet when running rails test test/models, the graphical output reports no coverage for the model exercised by test/models/channel_test.rb:
require "test_helper"
class ChannelTest < ActiveSupport::TestCase
test "invalid if name not defined" do
channel = Channel.new(priority: 1, unit_cost: 1, daily_limit: 9999)
assert_not channel.valid?
assert_not channel.save, "Saved the channel without a name"
end
Update: I presumed the chosen syntax of the test might be the flaw, so I added a supplemental test, yet the result reported for the model is still 3 relevant lines, 0 lines covered, and 3 lines missed:
class Channel < ApplicationRecord  # red
  validates :name, presence: true  # red
end                                # red
Thus the test is passing, but the coverage result is confounding:
a) run standalone, the coverage count is 0/0, even though the tests pass;
b) what constitutes a miss, or conversely coverage, by a test?
test_helper.rb
require 'simplecov'
SimpleCov.start

ENV['RAILS_ENV'] ||= 'test'
require_relative "../config/environment"
require "rails/test_help"
require 'webmock/minitest'

class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)
  fixtures :all

  def log_in_as(user, shop)
    post user_session_url, params: { user_id: user.id, active_shop_id: @site.shop_id }
  end
end
Update 2: As per @BroiSatse's suggestion, commenting out parallelize(workers: :number_of_processors) allows coverage to be measured. (Rails runs parallel tests in forked worker processes, so the coverage collected in the workers apparently never reaches the process that generates the report, hence the 0/0 result.)
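For reference, a minimal sketch of that workaround in test_helper.rb (assuming the rest of the class is unchanged):

class ActiveSupport::TestCase
  # parallelize(workers: :number_of_processors)  # disabled so SimpleCov tracks a single process
  fixtures :all
end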
Thank you, this question was my issue too. All credit to BroiSatse's comment.
I added the following lines to test_helper.rb, which maintain the speed benefit of parallelized tests rather than simply commenting out the line. They are based on the link in BroiSatse's comment:
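A sketch based on the parallel-test recipe in SimpleCov's README (likely what the linked comment describes): each forked worker gets its own command name, and each worker's result is collected on teardown so it is merged into the final report.

class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)

  # Give each worker a unique command name so results don't overwrite each other.
  parallelize_setup do |worker|
    SimpleCov.command_name "#{SimpleCov.command_name}-#{worker}"
  end

  # Collect each worker's result so it is merged into the combined report.
  parallelize_teardown do |worker|
    SimpleCov.result
  end

  fixtures :all
end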
Following this, my test coverage reported by SimpleCov "improved" from 3.8% to over 90%.