I've been approached a few times recently about how to write low-latency audio apps on Android, and I point such queries to OpenSL and say "look how much better it is than it used to be" (provided you use C++, of course)...
But the bottom line is that the Android audio APIs - and that includes OpenSL - still miss the basic trick, which is being able to offer a programmatic contract as simple as this:
App asks: what sets of audio rates and formats do you support?
(API responds: maybe only 22 kHz stereo, for the sake of simplicity here.)
App asks: how many audio blocks do I need to keep prepared and primed in the queue to guarantee no break-up at this rate/format, with minimal latency?
(API responds: maybe 2 blocks of 512 sample frames each.)
App prepares the first 2 blocks, submits them, and gets "block delivery" callbacks; it runs a separate thread to keep internal queues topped up, ready to deliver to the callbacks.
With that sort of contract, every app can run at the minimal latency dictated by the underlying audio device (through the driver layer), on any device. Without that sort of contract, every app developer is kept guessing, and has to assume a worst case that works on all devices available for testing. :)
I implemented this scheme nearly a decade ago in the Intent Sound System (I had the luxury of designing and implementing a complete audio architecture for mobile devices from scratch!). It is a piece of cake to do, and IMO audio developers for Android are screaming for it. Intent was focused on ultra-low audio latency for games, multimedia, and musical instruments...
I should note that Apple have also missed the same trick - they've been able to get away with it, however, as they offer a fixed range of hardware...