I have been playing with Google Assistant in Allo quite a lot, and it seems to me to have significant limitations. It cannot cope with what Google demonstrated at the Pixel launch--and I don't mean speech vs. text. It already knows where I live and work, presumably because that is part of my Google profile. Here is what I considered a simple proposition, especially for a system that supposedly learns from interaction.
Me: How long would it take for me to get to [location]? (the location being my wife's place of work)
GA: Here is your route. It will take 11 minutes.
Me: Please remember that [Name] is my wife.
GA: OK. I will remember that.
Me: Please remember that my wife, [Name], works at [location].
GA: OK. I will remember that.
Me: How long would it take for me to get to my wife's place of work?
GA: Here are some results from the web: [various garbage].
Me: Where does my wife work?
GA: Here are some results from the web: [various garbage].
Me: Where does [Name] work? ('Name' being my wife's name, which I told it specifically to remember)
GA: Here are some results from the web: [various garbage].
As best I can tell, Google Assistant in Allo cannot learn, even when expressly asked to do so. Maybe I've got this wrong. (Please correct me!) Maybe it is not supposed to work this way, but that is certainly not the impression given at the Pixel launch event. Even with basic requests, like asking it to save a reminder, Google Now performs better.
I have been using GA in Allo to try to gauge whether the tighter integration found in the Pixel would be worth the frankly criminal price tag, and the answer I have found so far is a resounding 'No.' That is, not unless GA in the Pixel uses an entirely different AI altogether. As far as I can see from here, Google is just blowing smoke.