Building a Docker Image and pushing to AWS using Auto DevOps in GitLab

I'm a fairly experienced dev. I have extensive knowledge of Docker, Jenkins, Git, Kubernetes, and so on.

I have an existing Jenkins job that checks out some stuff from our Git repo, builds a Docker image, and pushes it to an AWS ECR. This obviously requires some things to be pre-installed on our Jenkins instance (curl, the AWS CLI, Docker, etc.).

We have access to a GitLab instance administered by an external team. I have owner privileges on our 'group'. I'd like to recreate my Jenkins job in GitLab, so (naively) I just did something like this:

stages:          # List of stages for jobs, and their order of execution
  - build

pre-install:     # install various prerequisite tools, then build and push
  stage: build
  script:
    - apt-get update && apt-get -y upgrade && apt-get autoremove && apt-get autoclean
    - apt-get install -y unzip curl jq wget
    # etc.
    # install docker here, etc.
    # build, etc.
    # push to ECR, etc.

So then I learn that GitLab is actually invoking these builds in some sort of container in some sort of Kubernetes cluster to which I have no access. I think this is referred to as 'Auto DevOps'?

So here are some questions:

  1. If I wanted to pre-install some things in the 'runner' that is provided by Auto DevOps (assuming that's what's happening), how can I do it? Can I even do it with my level of access? That is, assuming the job is running in a Docker container, how can I build and specify the image used for that container?
  2. Can I build a Docker image with 'Auto DevOps'? (There is a thing called 'Docker in Docker' (DinD), but the docs for it look like they've been written in Swahili.)
  3. Can someone give me a paragraph of explanation along the lines of 'Oh yeah, I know where you are; this is what GitLab REALLY is, above and beyond a Git repo plus Jenkins, etc.'?

Thank you.

1 Answer (by Denys Bondar)

In GitLab CI, jobs in your stages are executed by GitLab Runners. Runners come in different executor types; they are described here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-executors.

You don't need access to the machine on which a runner is running. Where the build happens depends on the executor type: with the shell executor it runs directly on the runner's host machine (not in Docker); with the docker executor it runs in a container (the image is usually specified in .gitlab-ci.yml); and so on.
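
For example, here is a minimal sketch of a job that picks its own image and runner; the image name and the 'docker' tag are only placeholders, since your runners may be registered with different tags, or none at all:

build-env:
  image: ubuntu:22.04     # assumption: any image the runner is allowed to pull
  tags:
    - docker              # assumption: example tag routing the job to a docker-executor runner
  script:
    - apt-get update && apt-get install -y curl
    - curl --version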

If you want to preinstall something, use before_script (better to put it under default:, since the global top-level before_script form is considered obsolete).

For example:

image: gcc           # default image for all jobs in this file

build:
  stage: build
  before_script:     # runs before "script" in the same job container
    - apt update && apt -y install make autoconf
  script:
    - g++ helloworld.cpp -o mybinary
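
If several jobs need the same tools, the same pre-install step can be hoisted into default: so that every job inherits it; a sketch of that layout:

default:
  image: gcc
  before_script:
    - apt update && apt -y install make autoconf

build:
  stage: build
  script:
    - g++ helloworld.cpp -o mybinary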

As for DinD (Docker-in-Docker): it is a connection from the job's container to a Docker daemon (for example the runner host's daemon, or a docker:dind service container), which allows you, for example, to build Docker images from inside the container.
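
To tie this back to the original question, here is a rough sketch of a job that builds an image with DinD and pushes it to AWS ECR. It assumes AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and ECR_REPO (e.g. <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>) are defined as CI/CD variables in the project or group settings, that the runner is configured to allow the dind service (typically a privileged docker executor), and that the AWS CLI can be installed via apk in the docker image:

build-and-push:
  stage: build
  image: docker:24
  services:
    - docker:24-dind                 # provides the Docker daemon the job talks to
  variables:
    DOCKER_TLS_CERTDIR: "/certs"     # enable TLS between the job container and the dind service
  before_script:
    - apk add --no-cache aws-cli     # assumption: Alpine-based docker image
    # log in to ECR using the assumed CI/CD variables
    - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin "$ECR_REPO"
  script:
    - docker build -t "$ECR_REPO:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ECR_REPO:$CI_COMMIT_SHORT_SHA"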

Here is a thorough description of what each keyword in .gitlab-ci.yml is responsible for: https://docs.gitlab.com/ee/ci/yaml/