This is the FIR filter code generated by the Filter Design Tool. I'm sampling audio through the ADC, into the dsPIC, and out the DAC.
Code:
#define BUFFER_SIZE   16
#define FILTER_ORDER  8

int input[BUFFER_SIZE];   /* circular input buffer */
int inext = 0;            /* index of the next (newest) sample */

input[inext] = ADCBUF0;   /* fetch sample from ADC buffer */
CurrentValue = FIR_Radix(
    FILTER_ORDER + 1,     /* number of filter taps */
    COEFF_B,              /* coefficients */
    BUFFER_SIZE,          /* input buffer length */
    input,                /* input buffer */
    inext);               /* current sample index */
inext = (inext + 1) & (BUFFER_SIZE - 1);  /* advance and wrap */
I do not understand the "inext = ..." line. If I remove it, my test tones run cleanly through ADC >> code >> DAC, but they are not filtered. If I leave the line in, the DAC just puts out noise. I don't understand the logic behind it. If we evaluate it:
(inext + 1) & (BUFFER_SIZE - 1) = new index
0001 & 1111 = 0001
0010 & 1110 = 0010
0011 & 1101 = 0001
0100 & 1100 = 0100
0101 & 1011 = 0001
0110 & 1010 = 0010
... & .... = ....
So the way I see it, you're indexing the buffer in a really sporadic way (index 1, then 2, then 1, then 4, then 1, then 2).
...Am I missing something, or is this another ME fault?
Regards