I am using an ADC with DMA that captures analog values and generates a callback when the transfer is complete. I then want to hand the data off to a thread for processing, since processing takes some time and I don't want to block the callback function.
The buffer is 200 samples long. I use it as a ping-pong buffer and generate callbacks on the ADC half-complete and full-complete events, so there should be no overlap of data in the same buffer.
Below is my current implementation on STM32 with CMSIS RTOS 2
#define BUFFER_SIZE 100

static osMessageQueueId_t queue;            /* created in process_thread() */
static int16_t buffer[BUFFER_SIZE * 2] = {0};
static volatile int16_t *p_buf[2] = {&buffer[0], &buffer[BUFFER_SIZE]};

typedef struct
{
    void *addr;
    uint32_t len;
} msg_t;

/* First half of the ping-pong buffer is ready */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    msg_t msg;
    msg.addr = (void *)p_buf[0];
    msg.len = BUFFER_SIZE;
    osMessageQueuePut(queue, &msg, 0, 0);   /* timeout must be 0 in ISR context */
}

/* Second half of the ping-pong buffer is ready */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    msg_t msg;
    msg.addr = (void *)p_buf[1];
    msg.len = BUFFER_SIZE;
    osMessageQueuePut(queue, &msg, 0, 0);
}

static void process_thread(void *argument)
{
    msg_t msg;
    /* queue must exist before the ADC/DMA starts posting to it */
    queue = osMessageQueueNew(4, sizeof(msg_t), NULL);
    while (1)
    {
        osMessageQueueGet(queue, &msg, NULL, osWaitForever);
        // process data
    }
}
- What is the recommended way to transfer data from the half buffer to a thread from a callback/ISR using CMSIS RTOS 2?
- The queue size is currently set to 4. If the processing thread takes too long, the queue becomes useless, because the buffer pointer will point to stale or still-ongoing data. How can I overcome this issue?
If you're stuck with half-buffer notifications because of a hardware limitation, one possibility is to copy from the half buffer into another buffer drawn from a larger pool.
You'll (probably) eventually need to do this anyway, both so you don't lose data (as you're experiencing) and so you can bridge the non-cached/cached gap. Your hardware ping-pong DMA buffer necessarily has to be non-cached, and you'll want whatever buffer you actually work with, particularly if you're doing filtering or other post-processing on it, to be cached.
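On STM32 parts that actually have a data cache (Cortex-M7 based devices, for instance), the usual way to get that non-cached DMA buffer is to put it in its own linker section and mark that region non-cacheable with the MPU. A rough sketch of what that can look like with the HAL; the `.dma_buffer` section name, region size, and alignment are assumptions about your linker script and buffer, and the specific MPU attributes are a starting point, not gospel:

/* Sketch only: ".dma_buffer" section and 512-byte region size are assumptions. */
__attribute__((aligned(32), section(".dma_buffer")))
static int16_t buffer[BUFFER_SIZE * 2];

static void dma_buffer_mpu_config(void)
{
    MPU_Region_InitTypeDef mpu = {0};

    HAL_MPU_Disable();

    mpu.Enable           = MPU_REGION_ENABLE;
    mpu.Number           = MPU_REGION_NUMBER0;
    mpu.BaseAddress      = (uint32_t)buffer;          /* must be aligned to the region size */
    mpu.Size             = MPU_REGION_SIZE_512B;      /* smallest power of two covering 400 bytes */
    mpu.AccessPermission = MPU_REGION_FULL_ACCESS;
    mpu.TypeExtField     = MPU_TEX_LEVEL1;            /* normal memory... */
    mpu.IsCacheable      = MPU_ACCESS_NOT_CACHEABLE;  /* ...but not cacheable */
    mpu.IsBufferable     = MPU_ACCESS_NOT_BUFFERABLE;
    mpu.IsShareable      = MPU_ACCESS_NOT_SHAREABLE;
    mpu.DisableExec      = MPU_INSTRUCTION_ACCESS_DISABLE;
    HAL_MPU_ConfigRegion(&mpu);

    HAL_MPU_Enable(MPU_PRIVILEGED_DEFAULT);
}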
You can also wait on a queue in an ISR (with the same stipulation: the timeout must be 0), so have the ISR get a buffer from the "empty" queue, fill it, then put it in the "filled" queue. The application takes from the "filled" queue, processes the data, and returns the buffer to the "empty" queue.
If the ISR ever hits a situation where it can't get an "empty" buffer, you need to decide how to handle that (skip? halt?). It basically means the application repeatedly ran over its deadlines until the queue emptied. If it's a transient load, you can increase the queue depth and use more buffers to cover it. If it slowly gets there and can't recover, you need to optimize your application or decide how to gracefully drop data, because you can't process it fast enough in general.
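A minimal sketch of that empty/filled arrangement with CMSIS-RTOS 2, reusing the `p_buf` halves from the question; `block_t`, `POOL_SIZE`, `pool_init()` and the `dropped` counter are names I made up for the example:

#define POOL_SIZE 8                            /* how many work buffers to rotate through */

typedef struct
{
    int16_t data[BUFFER_SIZE];
    uint32_t len;
} block_t;

static block_t pool[POOL_SIZE];
static osMessageQueueId_t empty_q;             /* pointers to free blocks */
static osMessageQueueId_t filled_q;            /* pointers to blocks ready for processing */
static volatile uint32_t dropped;              /* half-buffers discarded because no block was free */

/* Call once at startup, before the ADC/DMA is started. */
static void pool_init(void)
{
    empty_q  = osMessageQueueNew(POOL_SIZE, sizeof(block_t *), NULL);
    filled_q = osMessageQueueNew(POOL_SIZE, sizeof(block_t *), NULL);
    for (uint32_t i = 0; i < POOL_SIZE; i++)
    {
        block_t *b = &pool[i];
        osMessageQueuePut(empty_q, &b, 0, 0);
    }
}

/* Shared by the half- and full-complete callbacks; runs in ISR context. */
static void push_half(const volatile int16_t *src)
{
    block_t *b;
    if (osMessageQueueGet(empty_q, &b, NULL, 0) != osOK)   /* timeout must be 0 in an ISR */
    {
        dropped++;                             /* no free block: skip this half (or halt, assert, ...) */
        return;
    }
    for (uint32_t i = 0; i < BUFFER_SIZE; i++)
        b->data[i] = src[i];                   /* copy out of the DMA ping-pong buffer */
    b->len = BUFFER_SIZE;
    osMessageQueuePut(filled_q, &b, 0, 0);
}

void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc) { push_half(p_buf[0]); }
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)     { push_half(p_buf[1]); }

static void process_thread(void *argument)
{
    block_t *b;
    while (1)
    {
        osMessageQueueGet(filled_q, &b, NULL, osWaitForever);
        // process b->data, b->len
        osMessageQueuePut(empty_q, &b, 0, osWaitForever);   /* return the block to the pool */
    }
}

The queues carry pointers rather than whole blocks, so the per-half cost in the ISR is the sample copy plus one small queue entry, and the blocks themselves can live in cached RAM for the processing thread.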
You can get away with using ring buffers where only one side modifies the write pointer and the other the read pointer, but if you've got OS queues that work across ISRs, they make the code cleaner and it's more obvious what's going on.
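For completeness, the bare-bones single-producer/single-consumer ring looks something like the following; the power-of-two depth is my own choice so the wrap is a cheap mask, and on a single-core Cortex-M the volatile indices are usually enough (you'd need barriers if producer and consumer ever ended up on different cores):

#include <stdbool.h>
#include <stdint.h>

#define RING_DEPTH 8u                          /* must be a power of two */

typedef struct
{
    void *slot[RING_DEPTH];
    volatile uint32_t head;                    /* written only by the ISR (producer) */
    volatile uint32_t tail;                    /* written only by the thread (consumer) */
} ring_t;

/* Producer side (ISR): returns false if the ring is full. */
static bool ring_put(ring_t *r, void *p)
{
    uint32_t next = (r->head + 1u) & (RING_DEPTH - 1u);
    if (next == r->tail)
        return false;                          /* full: caller decides whether to drop or halt */
    r->slot[r->head] = p;
    r->head = next;                            /* publish only after the slot is written */
    return true;
}

/* Consumer side (thread): returns false if the ring is empty. */
static bool ring_get(ring_t *r, void **p)
{
    if (r->tail == r->head)
        return false;                          /* empty */
    *p = r->slot[r->tail];
    r->tail = (r->tail + 1u) & (RING_DEPTH - 1u);
    return true;
}

Note the consumer has nothing to block on here, so you'd typically pair it with a semaphore or thread flag anyway, which is most of the reason the OS queue version ends up cleaner.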