Google understands natural talk. They have for years. It is not simply a voice-to-text application; it understands context. If you say "I'll see you next week", it will use the word "week." If you say "I haven't eaten and I feel weak", it will use the word "weak." Try it for yourself, with other examples. It can only do this if the transcription engine understands context.
By natural talk I mean a more human way of asking, not reciting static commands... but that doesn't matter now, since it can handle that.
I'm not sure what you're getting at. API support is clearly there. An API is a function call. It is never contained in the function being called. It is an element of the routine currently running... in this case, voice-to-text and the Google Search bar. It calls the target function and passes variables to it. Why do you believe that any of this needs to reside in the called function? I take it you don't do any coding.
API stands for application programming interface; as the name states, it's an interface, such as a function, not a call of that function.
OK, maybe I didn't say it clearly. I mean that Search doesn't have an API that lets you extend its functionality. Yeah, Android has speech-to-text that lets you do voice commands, but the app developer is the one who needs to analyze the text in this situation. Voice commands in this form don't encourage developers to implement them.
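To show what I mean by "the developer needs to analyze the text", here's a minimal sketch of the stock speech-to-text flow. The RecognizerIntent calls are the real platform API; SpeechDemoActivity and postTweet are names I made up. The platform only hands back raw strings, and matching a "command" out of them is entirely the app's job:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class SpeechDemoActivity extends Activity {
    private static final int REQUEST_SPEECH = 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask the platform recognizer for free-form speech-to-text.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                String spoken = results.get(0);
                // All the "command" logic lives in the app: the platform
                // returned raw text, so we match keywords ourselves.
                if (spoken.toLowerCase().startsWith("tweet ")) {
                    postTweet(spoken.substring("tweet ".length())); // hypothetical helper
                }
            }
        }
    }

    private void postTweet(String message) { /* app-specific */ }
}
```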
I assume we're talking here about one place to issue commands without using other 3rd-party software, the same as Siri does; in this case, the Google search bar. Currently what it does (or did in 4.0 and below) is react to specific hard-coded commands and trigger a so-called intent (Intent | Android Developers) assigned to that command, which searches for an appropriate application that can handle the intent. For example, when you say "navigate to", it fires a navigation intent with an argument containing the destination you said, and opens the right app or asks if there's more than one app that can handle that intent. But this uses the APIs of other applications, which were in use even before voice commands existed in Android; text search itself does not have an API. You can't extend the functionality of text search, or replace the search bar altogether with more commands. It's no different from what Siri is doing now either (though I believe some things are hard-coded into Siri itself).
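To illustrate (I don't know exactly what the search bar fires internally, so take this as a guess at the shape of it): once "navigate to X" is matched, it boils down to an ordinary intent that any capable app can claim through its manifest filter. NavigateDemo is a made-up name; the google.navigation: scheme is the one Google Maps handles:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public class NavigateDemo extends Activity {
    // Roughly what "navigate to <destination>" boils down to once the
    // hard-coded command is matched: an ordinary intent that any capable
    // app can claim via an intent filter in its manifest.
    void navigateTo(String destination) {
        Intent navigate = new Intent(Intent.ACTION_VIEW,
                Uri.parse("google.navigation:q=" + Uri.encode(destination)));
        // If more than one installed app matches, the system shows a chooser.
        startActivity(navigate);
    }
}
```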
My idea is that a Search bar API (or even one for other apps like it) would find the app that can handle a text command (so commands could be used outside voice recognition too), like it does now with URLs and the open-file intent, for example. So, for example, Twitter could declare in its manifest that it handles a text-command intent with a filter on the prefix "tweet", and it would open Twitter, or show a special widget in the search bar, or anything else that lets you send the tweet you spoke; a sketch of what I mean is below. And that's only my example; I know Google could do something a lot better. Simple commands are easy to handle like that, but in natural speech, where the command doesn't need to be at the beginning, it's hard to filter down what the user really wants from a manifest.
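Purely as a sketch of my idea, and nothing here exists in Android today: the action name com.example.action.TEXT_COMMAND, the extra key, the prefix filter, and TweetCommandActivity are all invented. The manifest would declare an intent filter for that action with a (made-up) prefix attribute of "tweet", so the system could route matching commands here without the app already running:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical receiving side of a text-command intent. The search bar
// would strip the "tweet" prefix and pass along the remaining text.
public class TweetCommandActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = getIntent();
        // Made-up extra key carrying the rest of the spoken/typed command.
        String message = intent.getStringExtra("com.example.extra.COMMAND_TEXT");
        if (message != null) {
            composeTweet(message); // hypothetical helper
        }
    }

    private void composeTweet(String message) { /* app-specific */ }
}
```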
Yeah, the intent system makes what I said look kind of no different from what it's doing now from a technical standpoint, but that's how app-to-app communication in Android works. If Android did app-to-app communication with normal function calls, an app could, for example, register voice commands with the search bar or the voice recognizer directly. Android intents are a more passive way of doing it, by declaring it in the application manifest, which is in fact better and more open.
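Here's what that "passive" discovery looks like in practice (HandlerListDemo is a made-up name; queryIntentActivities is a real platform call). The system never calls a registration function; it just asks the package manager which installed apps declared a matching filter:

```java
import android.app.Activity;
import android.content.Intent;
import android.content.pm.ResolveInfo;
import android.net.Uri;
import android.util.Log;
import java.util.List;

public class HandlerListDemo extends Activity {
    void listNavigationHandlers() {
        Intent probe = new Intent(Intent.ACTION_VIEW,
                Uri.parse("google.navigation:q=somewhere"));
        // Every match comes straight from a manifest declaration; the
        // handling app may never even have run since it was installed.
        List<ResolveInfo> handlers =
                getPackageManager().queryIntentActivities(probe, 0);
        for (ResolveInfo info : handlers) {
            Log.d("HandlerListDemo", "can handle: " + info.activityInfo.packageName);
        }
    }
}
```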
Now, as for what happened today: they announced the ability to use the Knowledge Graph, which makes the search bar feel like Siri, but you still can't extend its functionality with your own app. In fact, there was a question in the Android Fireside Chat session about whether the new search bar stuff has any APIs, and they said a simple "no". But it's still kind of better than Siri: Siri can only show a Wolfram Alpha result, while the new search bar can directly answer questions on scientific stuff.
