Difference Between Interrupt Driven vs DMA for STM32 I2C


I looked around the internet but still don't clearly understand the difference between the interrupt-driven and DMA methods when it comes to I2C communication. I am using a Nucleo-L476RG board from ST (https://www.st.com/en/evaluation-tools/nucleo-l476rg.html) and hooked up an ICM20948 IMU from SparkFun (https://www.sparkfun.com/products/15335) over I2C.

So far I have used the HAL_I2C_Mem_Read/Write functions to read and write data over I2C to the ICM20948. I use these functions as follows (N, M, ICM20948_ADDR and REG_ADDR are placeholders):

while (1) {
  // Blocks until all N bytes are written
  HAL_I2C_Mem_Write(&hi2c1, ICM20948_ADDR, REG_ADDR, I2C_MEMADD_SIZE_8BIT, tx_data, N, HAL_MAX_DELAY);
  // Blocks until all M bytes are read
  HAL_I2C_Mem_Read(&hi2c1, ICM20948_ADDR, REG_ADDR, I2C_MEMADD_SIZE_8BIT, rx_data, M, HAL_MAX_DELAY);
  HAL_Delay(10);
}

Now I want to use non-blocking methods to read/write data from the ICM20948 via I2C, and I can choose between HAL_I2C_Mem_Read/Write_IT and HAL_I2C_Mem_Read/Write_DMA. Reading online, all I can find is that "DMA requires no intervention from the CPU", but for the way I want to read the ICM20948 I really cannot see how the interrupt-based method is any different. For example, the ICM20948 has a data-ready output pin which goes high when new data is available. I can hook that pin up to my Nucleo board to generate an interrupt. The code will look something like the following:

volatile bool read_complete_flag = false;
uint8_t imu_data[M]; // M, ICM20948_ADDR and DATA_REG are placeholders

int main(void) {
  while (1) {
    if (read_complete_flag) {
      // Do something with imu_data
    }
  }
}

// Runs when the ICM20948 data-ready pin triggers an EXTI interrupt
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin) {
  read_complete_flag = false;
  // Kicks off the I2C read of M bytes; returns immediately
  HAL_I2C_Mem_Read_IT(&hi2c1, ICM20948_ADDR, DATA_REG, I2C_MEMADD_SIZE_8BIT, imu_data, M);
}

// Runs when the I2C read of M bytes has finished
void HAL_I2C_MemRxCpltCallback(I2C_HandleTypeDef *hi2c) {
  read_complete_flag = true;
}

This should read data from the ICM20948 without blocking the main while loop.

However, a DMA-based I2C read will also look similar:

int main(void) {
  while (1) {
    if (read_complete_flag) {
      // Do something with imu_data
    }
  }
}

// Runs when the ICM20948 data-ready pin triggers an EXTI interrupt
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin) {
  read_complete_flag = false;
  // Kicks off the DMA-backed I2C read of M bytes; returns immediately
  HAL_I2C_Mem_Read_DMA(&hi2c1, ICM20948_ADDR, DATA_REG, I2C_MEMADD_SIZE_8BIT, imu_data, M);
}

// Runs when DMA has finished moving all M bytes
void HAL_I2C_MemRxCpltCallback(I2C_HandleTypeDef *hi2c) {
  read_complete_flag = true;
}

So what exactly is the difference between the DMA and interrupt-driven reads here? It seems like they both achieve the same thing, and DMA will be more complicated to set up.

Answer (Ilya):

This question has little to do with I2C itself; it is really about the difference in how interrupt-based drivers and DMA-based drivers work.

With an interrupt-based driver, the CPU receives an interrupt for every unit of data sent or received (in the case of I2C or UART, that's an interrupt after every byte). The CPU is free to do anything else between the interrupts, but these interrupts are pretty frequent, especially if you send long sequences of data at a time. And every interrupt costs a context save/restore, a vector table lookup, and an interrupt handler that has to decide what event happened and what to do next (load the next byte? read the arrived byte? end the communication?). That is a lot of switching and a lot of interrupt code executed again and again. It is much better than the blocking approach in terms of CPU utilization, but if you have a few peripherals talking at the same time, all this interrupt switching becomes wasteful.
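To make the per-byte overhead concrete, here is a minimal register-level sketch of what an interrupt-driven I2C receive looks like on an STM32L4. The buffer, index and length variables are illustrative, not taken from any real driver:

// Illustrative per-byte receive ISR for the STM32L4 I2C1 peripheral.
// rx_buf, rx_idx and rx_len are assumed to have been set up by the driver.
static volatile uint8_t  rx_buf[32];
static volatile uint16_t rx_idx;
static volatile uint16_t rx_len;

void I2C1_EV_IRQHandler(void)
{
  if (I2C1->ISR & I2C_ISR_RXNE) {            // one byte has arrived
    rx_buf[rx_idx++] = (uint8_t)I2C1->RXDR;  // reading RXDR clears RXNE
    if (rx_idx == rx_len) {
      // last byte: signal completion, generate STOP, etc.
    }
  }
  // a real driver also handles TXIS, NACK, STOP and error flags here
}

This whole handler runs once per byte, which is where the repeated cost comes from.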

A DMA solution takes the CPU almost entirely out of the equation. Instead of writing/reading data unit by unit, the CPU simply configures the DMA along the lines of "Hey, DMA, here is the memory address from which you need to send data over I2C. Let me know when you're done." With DMA, the CPU doesn't need to come back until the transfer is done (sending or receiving), or until there is an error.
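With the ST HAL, that "configure and forget" step is a single call. A sketch, where the handle, device address, register name and buffer are assumptions from a typical CubeMX project, not from the question:

// Start a 12-byte read; this call returns immediately while
// DMA moves the bytes in the background.
HAL_I2C_Mem_Read_DMA(&hi2c1, ICM20948_ADDR, ACCEL_XOUT_H,
                     I2C_MEMADD_SIZE_8BIT, imu_buf, 12);

// ...the CPU does other work here...

// A single interrupt at the very end lands in this callback:
void HAL_I2C_MemRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
  // all 12 bytes are already sitting in imu_buf
}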

In the case of I2C, an interrupt-based driver triggers an interrupt for every byte, while a DMA-based driver triggers only one interrupt when the whole transfer is finished. For example, reading 12 bytes of sensor data costs at least 12 interrupts with the interrupt-driven approach, but a single transfer-complete interrupt with DMA.


DMA configuration is often developed (in my personal experience) on top of an existing blocking driver, because you use the same peripheral with all the same configuration; you just flip a couple of bits to give the DMA access to it, configure a couple of flags in the DMA, and you're pretty much done. It feels complicated only the first couple of times, but in essence DMA configuration is: "here is the source memory address, here is the destination memory address, here is how many chunks of data to move, and every chunk is this large." Behind all its apparent complexity, that's about it. There are some bonus bells and whistles, such as an alternating double buffer for receiving data.
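At the register level on an STM32L4 that really is the whole recipe. A sketch, assuming DMA1 channel 7 is routed to I2C1_RX and imu_buf is our receive buffer (both assumptions; the CSELR request selection and error handling are omitted for brevity):

// "Source, destination, count, chunk size" in concrete registers:
DMA1_Channel7->CPAR  = (uint32_t)&I2C1->RXDR;  // source: I2C receive data register
DMA1_Channel7->CMAR  = (uint32_t)imu_buf;      // destination: buffer in RAM
DMA1_Channel7->CNDTR = 12;                     // how many chunks to move
DMA1_Channel7->CCR   = DMA_CCR_MINC            // increment memory address per chunk
                     | DMA_CCR_TCIE            // interrupt once, on transfer complete
                     | DMA_CCR_EN;             // go (chunk size defaults to 1 byte)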

A few notes on DMA:

  • Not worth it for small amounts of data. All that extra logic is unnecessary if you want to send 5 bytes.
  • DMA does NOT have access to the entire memory space. A DMA unit can typically access only parts of RAM and some, but not all, peripherals. If a microcontroller has multiple DMA units, each of them may be able to access different parts of memory. For example, DMA often has no access to tightly coupled RAM.
  • DMA has no access to the cache, and the cache is not aware of DMA. If you're not careful with cache policies (for example, a write-back policy), you can end up with data consistency problems: the CPU reads an old value from the cache (the cache line was never invalidated), while DMA has already placed a new value directly into RAM. A sketch follows this list.
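On cores that actually have a data cache (e.g. Cortex-M7 parts; the Nucleo-L476RG's Cortex-M4 has no data cache, so the board in the question is unaffected), CMSIS provides cache-maintenance helpers. A minimal sketch, with the buffer name and size as assumptions:

// Buffer that DMA writes into, aligned to a 32-byte cache line.
static uint8_t dma_rx_buf[64] __attribute__((aligned(32)));

void on_dma_rx_complete(void)
{
  // Discard stale cached copies so the CPU sees what DMA wrote to RAM.
  SCB_InvalidateDCache_by_Addr((uint32_t *)dma_rx_buf, sizeof dma_rx_buf);
  // dma_rx_buf can now be read safely.
}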