"Workerpool Worker terminated Unexpectedly" for Mocha tests in CircleCI


I have TypeScript tests that run with Yarn and Mocha, and they work fine locally. When the job runs in CircleCI, however, I get this:

1) Uncaught error outside test suite:
   Uncaught Workerpool Worker terminated Unexpectedly
  exitCode: `null`
  signalCode: `SIGKILL`
  workerpool.script: `/home/circleci/my-project/node_modules/mocha/lib/nodejs/worker.js`
  spawnArgs: `/usr/local/bin/node,--inspect,--inspect=43215,/home/circleci/my-project/node_modules/mocha/lib/nodejs/worker.js`
  spawnfile: `/usr/local/bin/node`
  stdout: `null`
  stderr: `null`

Error: Workerpool Worker terminated Unexpectedly
    exitCode: `null`
    signalCode: `SIGKILL`
    spawnfile: `/usr/local/bin/node`
    stdout: `null`
    stderr: `null`
  
    at ChildProcess.<anonymous> (node_modules/workerpool/src/WorkerHandler.js:294:13)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)

And here's my CircleCI config. I've edited a few project-specific fields and removed some sections that aren't relevant here, since they're for jobs later in the process that I can't currently run.

version: 2.1

orbs:
  aws-cli: circleci/aws-cli@<version>
  assume-role: airswap/assume-role@<version>

docker_base: &docker_base
  working_directory: ~/my-funnel  # Edited for privacy
  docker:
    - image: cimg/node:14.18.0
    - image: cimg/openjdk:17.0.1
    - image: amazon/dynamodb-local:1.17.1
      command: -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -inMemory -sharedDb
    - image: roribio16/alpine-sqs:1.2.0

jobs:
  build_and_test:
    <<: *docker_base
    environment:
      APP_ENV: test
      IS_CI: "true"
      # This ID remains here even though I have the Code Climate reporter stuff disabled for now
      CC_TEST_REPORTER_ID: mytestreporterid  # Hex value, redacted for privacy
    steps:
      - checkout
      - run: |
          sudo curl -L https://github.com/remind101/ssm-env/releases/download/v0.0.4/ssm-env -o /usr/local/bin/ssm-env && \
                cd /usr/local/bin && \
                echo 4a5140b04f8b3f84d16a93540daa7bbd ssm-env | md5sum -c && \
                sudo chmod +x ssm-env
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Install Dependencies
          command: yarn install --frozen-lockfile
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
      - run: yarn run lint
      - run: yarn run test # This is where it gives me the Workerpool error
      - run: yarn run package
      - run:
          name: Run Fossa Checks
          command: ./run_fossa.sh

  # A deploy job is defined here, of course, but I'm not getting to the point where I can use it.

workflows:
  no_flow:
    jobs:
      - build_and_test:
          context:
            - fossa
      # There's more here that runs the deploy job; see above comment

I've disabled all the tests with xdescribe and this still happens. Ideas appreciated.


UPDATE: I have this line in my run.ts file:

import * as child_process from "child_process";

// shell: true is what lets the inline env vars and the ssm-env wrapper resolve,
// since the whole command is passed as a single string.
const tests = child_process.spawn(
    "APP_ENV=test NODE_ENV=test ssm-env --with-decryption node_modules/mocha/bin/mocha --inspect -r ts-node/register -r tsconfig-paths/register --recursive 'test/**/*.spec.ts' --parallel",
    { stdio: "inherit", cwd: "./", shell: true }
);

I removed that --parallel and now all is well. I'm still puzzled about the root cause, but at least this is a workaround.
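A middle ground, if I still want parallel runs, would be to keep --parallel but cap the worker count with Mocha's --jobs flag, so fewer worker processes compete for the container's memory. A minimal sketch of that change to run.ts (the value 2 is an arbitrary starting point to tune):

import * as child_process from "child_process";

// Keep parallel mode, but cap the pool at two worker processes
// (--jobs sets the max number of parallel workers Mocha spawns).
const tests = child_process.spawn(
    "APP_ENV=test NODE_ENV=test ssm-env --with-decryption node_modules/mocha/bin/mocha --inspect -r ts-node/register -r tsconfig-paths/register --recursive 'test/**/*.spec.ts' --parallel --jobs 2",
    { stdio: "inherit", cwd: "./", shell: true }
);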

There are 2 answers below.

Answer 1:

I have had the same problem.

The actual cause is running out of memory: on Linux, the kernel's OOM killer terminates the offending process with SIGKILL, which matches the signalCode in the error above.

I allocated much more memory to the job, and my problem was solved.
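On CircleCI, the simplest way to do that is to bump the job's resource_class. A minimal sketch against the asker's config (large is just an example; available classes depend on your plan):

jobs:
  build_and_test:
    <<: *docker_base
    resource_class: large  # assumption: any class with more RAM than the default medium
    # ...rest of the job unchanged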

Answer 2:

This error happens for me whenever I run tests in parallel and there's an error in the test files themselves, outside of the actual test execution (so Mocha can't catch it). The "Workerpool Worker terminated Unexpectedly" error is just masking the actual underlying error.

By turning parallel runs off (parallel: false in the config file, or removing the -p/--parallel flag from the CLI), I can see the actual error being thrown. Once I've fixed it, I turn parallel back on.
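As a sketch of that toggle, assuming the project uses a .mocharc.yml (the file name and keys here are illustrative; Mocha also reads .mocharc.js and .mocharc.json):

# .mocharc.yml
require:
  - ts-node/register
  - tsconfig-paths/register
recursive: true
spec: 'test/**/*.spec.ts'
parallel: false  # flip back to true once the underlying error is fixed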