Scrambled noise when trying to fill audioBuffer with TarsosDSP AudioEvent buffer


I'm developing a simple beatbox application. First I wrote everything in pure Java; then I found the fantastic TarsosDSP framework. But now I've come across a problem I can't solve. Can you help me?

I'm setting up a SilenceDetector, which works great. Then I want to fill a byte[] buffer with the data from the AudioEvent in the process method. That's where I'm failing. The variable audioBuffer is of type ByteArrayOutputStream and is reused at runtime. See the relevant code snippet:

    private void setNewMixer(Mixer mixer) throws LineUnavailableException,
            UnsupportedAudioFileException {

        if (dispatcher != null) {
            dispatcher.stop();
        }
        currentMixer = mixer;

        // final AudioFormat format = new AudioFormat(sampleRate, frameRate, channel, true, true);
        final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
        final TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
        final int numberOfSamples = bufferSize;
        line.open(audioFormat, numberOfSamples);
        line.start();
        final AudioInputStream stream = new AudioInputStream(line);

        JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
        // create a new dispatcher
        dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);

        // add a processor, handle percussion event.
        silenceDetector = new SilenceDetector(threshold, false);

        dispatcher.addAudioProcessor(bufferFiller);
        dispatcher.addAudioProcessor(silenceDetector);
        dispatcher.addAudioProcessor(this);

        // run the dispatcher (on a new thread).
        new Thread(dispatcher, "GunNoiseDetector Thread").start();
    }

final AudioProcessor bufferFiller = new AudioProcessor() {

    @Override
    public boolean process(AudioEvent audioEvent) {

        if (isAdjusting) {
            byte[] bb = audioEvent.getByteBuffer().clone();

            try {
                audioBuffer.write(bb);
            } catch (IOException e) {
                e.printStackTrace();
            }

            System.out.println("current buffer.size(): " + audioBuffer.size());
        }
        else {
            if (audioBuffer.size() > 0) {
                try {
                    byte[] ba = audioBuffer.toByteArray();
                    samples.add(ba);
                    System.out.println("stored: " + ba.length);
                    audioBuffer.flush();
                    audioBuffer.close();
                    audioBuffer = new ByteArrayOutputStream();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        return true;
    }

    @Override
    public void processingFinished() {
    }

};

@Override
public boolean process(AudioEvent audioEvent) {
    if (silenceDetector.currentSPL() > threshold) {
        isAdjusting = true;
        lastAction = System.currentTimeMillis();
    }
    else {
        isAdjusting = false;
    }

    return true;
}
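As an aside, flush() and close() are documented no-ops on a ByteArrayOutputStream, so reset() alone is enough to reuse the buffer instead of creating a new one each time. A minimal standalone sketch of that pattern (the byte values are just placeholders):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BufferReuse {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream audioBuffer = new ByteArrayOutputStream();
        audioBuffer.write(new byte[]{1, 2, 3}); // placeholder audio bytes

        byte[] ba = audioBuffer.toByteArray(); // snapshot of collected bytes
        audioBuffer.reset();                   // reuse instead of flush/close/new

        System.out.println(ba.length);          // 3
        System.out.println(audioBuffer.size()); // 0
    }
}
```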

Any suggestions?

1 Answer

I found the reason why it didn't work! As mentioned here: What is the meaning of frame rate in AudioFormat?

For PCM, A-law and μ-law data, a frame is all data that belongs to one sampling interval. This means that the frame rate is the same as the sample rate.

So my AudioFormat was wrong! In the commented-out constructor call I had passed frameRate as the second argument, but AudioFormat(float, int, int, boolean, boolean) expects the sample size in bits there.
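A minimal sketch of a PCM format built with that constructor, where the frame rate automatically comes out equal to the sample rate (44.1 kHz, 16-bit, mono are assumed values; the two booleans mirror the signed/big-endian flags from the commented-out line):

```java
import javax.sound.sampled.AudioFormat;

public class PcmFormatCheck {
    public static void main(String[] args) {
        float sampleRate = 44100f; // assumed; use your application's rate
        // AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian)
        AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);

        // For PCM the frame rate equals the sample rate,
        // and one frame is sampleSizeInBits/8 * channels bytes.
        System.out.println(format.getFrameRate()); // 44100.0
        System.out.println(format.getFrameSize()); // 2
    }
}
```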