Nanoloop is a sequencer, so it doesn't require a small buffer size: its events are scheduled ahead of time rather than triggered live, which makes the extra buffering latency tolerable.

Pete Cole also argues for the native API approach, as employed by mikrosonic above:
http://sseyod.blogspot.com/2011/12/android-high-performance-audio-how-to.html?spref=tw

My guess is that this is the same thing Image-Line is doing.

The Linux kernel is not the issue; it's two things: the Hardware Abstraction Layer (HAL) around the audio devices, and the APIs used to access that HAL. Java also has nothing to do with it; you can write a low-latency audio application in Java or any other language if you like, and on Android you can write your DSP code as native code via the NDK. That's what we're doing with libpd; it runs as entirely native code inside a Java application. (And, as it happens, that's what we're doing on desktop, where with Java and JACK we can get fantastic performance that's a far cry from the Android trainwreck we're describing here.) It's only once audio has to go to or from the audio interface that you run into these other issues.
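To make the libpd point concrete, here's a minimal sketch of the kind of JNI bridge involved: the Java side owns the app, and a C function does the actual signal processing. The class and function names here are hypothetical, not libpd's actual API.

    #include <jni.h>

    /* Hypothetical native render function, called from a Java audio thread.
     * The DSP itself runs entirely in C; Java only shuttles the buffer. */
    JNIEXPORT void JNICALL
    Java_org_example_AudioLoop_renderNative(JNIEnv *env, jobject self,
                                            jshortArray outBuf, jint frames)
    {
        jshort *out = (*env)->GetShortArrayElements(env, outBuf, NULL);
        /* ... fill 'out' with 'frames' frames of audio here ... */
        (*env)->ReleaseShortArrayElements(env, outBuf, out, 0);
    }

The language is never the bottleneck in a setup like this; the hand-off to the platform's audio output is.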

Latency is, first and foremost, a function of whether you can consistently get audio out through as small a buffer as possible. You're just trying to dump data to the audio output as quickly as you can.
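The arithmetic is simple: one buffer of N frames at sample rate R contributes N/R seconds of latency, before the HAL and driver add their own. A quick sketch:

    /* Latency contributed by a single buffer, in milliseconds.
     * e.g. 256 frames at 44100 Hz ~= 5.8 ms; 4096 frames ~= 92.9 ms. */
    double buffer_latency_ms(int frames, int sample_rate_hz)
    {
        return 1000.0 * frames / sample_rate_hz;
    }

So the whole game is how small a buffer the platform lets you sustain without dropouts.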

However, assuming we're targeting 2.3 or later, the article above offers no data on the native API that Intermorphic and mikrosonic recommend, which makes it old news and not generally useful. The bug report thread everybody points to actually refers to 1.x APIs, to claims that are no longer true.
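For reference, the native path in question is OpenSL ES, which the NDK exposes as of Android 2.3. This is roughly what opening an output stream looks like; whether it actually delivers a small, sustainable buffer is exactly the open question. A minimal sketch, with error handling omitted and the buffer size chosen arbitrarily:

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    #define FRAMES 256
    static short pcm[FRAMES * 2];          /* one stereo 16-bit buffer */
    static SLObjectItf engineObj, mixObj, playerObj;
    static SLEngineItf engine;
    static SLPlayItf play;
    static SLAndroidSimpleBufferQueueItf bq;

    /* Called each time the system drains a buffer; refill and re-enqueue. */
    static void bqCallback(SLAndroidSimpleBufferQueueItf q, void *ctx)
    {
        /* ... render the next FRAMES frames into pcm here ... */
        (*q)->Enqueue(q, pcm, sizeof(pcm));
    }

    void startAudio(void)
    {
        slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);
        (*engine)->CreateOutputMix(engine, &mixObj, 0, NULL, NULL);
        (*mixObj)->Realize(mixObj, SL_BOOLEAN_FALSE);

        SLDataLocator_AndroidSimpleBufferQueue loc =
            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
        SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
            SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
            SL_BYTEORDER_LITTLEENDIAN };
        SLDataSource src = { &loc, &fmt };
        SLDataLocator_OutputMix outLoc = { SL_DATALOCATOR_OUTPUTMIX, mixObj };
        SLDataSink sink = { &outLoc, NULL };

        const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
        const SLboolean req[] = { SL_BOOLEAN_TRUE };
        (*engine)->CreateAudioPlayer(engine, &playerObj, &src, &sink,
                                     1, ids, req);
        (*playerObj)->Realize(playerObj, SL_BOOLEAN_FALSE);
        (*playerObj)->GetInterface(playerObj, SL_IID_PLAY, &play);
        (*playerObj)->GetInterface(playerObj,
                                   SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bq);

        (*bq)->RegisterCallback(bq, bqCallback, NULL);
        (*bq)->Enqueue(bq, pcm, sizeof(pcm));  /* prime the queue */
        (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);
    }

Note that even if you request a 256-frame buffer here, the HAL underneath is free to add its own buffering, which is why measured numbers on real devices are what we actually need.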

So, yes, I want to see the data on those new APIs; the information above seems like a waste of time. I still don’t think it’s going to be as good as iOS, but at least it’d be fresh, verified data.

The same reasoning also applies to Mac vs. Windows or Linux in music production.