Mostly looks good, and I learned some things about flows that I use personally, so thanks! I only have one substantive comment; see the last paragraph.

(nit) In the last paragraph of section 4.1.3, shouldn't only the consumption device that initiated the flow be able to use the code? That is, isn't there usually a session in the channel used for steps C and G, such that the attack fails if those two steps occur on different consumption devices? This seems related to section 6.1.12, but that section is about constraining the result of the attack rather than the data in the attack itself, right?

(nit, about usability rather than security) Section 6.1.1 mentions the possibility of an attacker using a VPN, but not the possibility of a legitimate user using one. For example, my phone almost always uses a VPN to my home network to limit what the cell or wifi network can see of my traffic. IP address geolocation would be mildly annoying when I'm not near my house, since I'd have to disable the VPN. I'm not sure whether this is common enough to include in the document, though. Do many companies have employees use VPNs to route all traffic through the corporate network? Or maybe embassies/consulates route traffic through the home country? I'd also be curious how IP geolocation behaves in remote multi-national places like Svalbard and Antarctica; for example, could two devices meters apart appear to be thousands of kilometers apart if they get their internet from two different countries? I really don't know much about that, though; I'm just guessing.

(nit) Section 6.1.2 mentions limiting the time to enter a code, but not the time to authenticate. I've had cross-device flows time out on me while I was typing a long password on a phone keyboard. I'm not sure whether this is common enough to be worth mentioning, but encouraging short passwords seems counterproductive.
Also, the timeout could encourage users to be less careful about phishing (conventional password phishing or the cross-device types) if they know they have less time.

(optional) In section 6.1.5, would it make sense to also mention the possibility of standardizing something like https://en.wikipedia.org/wiki/EICAR_test_file for these QR codes? That way, in the flows that don't involve email/SMS/etc., like a TV showing a QR code and a phone scanning it, the QR code could include a standardized string that spam filters could look for. I have no idea how well this would work in practice, and it might be out of scope for this document. Feel free to ignore this idea if it's not helpful.

In section 6.2.2, isn't the protocol also vulnerable to much more targeted attacks, where the attacker can predict exactly when a specific user is going to use the protocol? E.g., the whiteboard example from section 3.3.2 could take place in a publicly streamed presentation, so the attacker could see the user about to initiate the authorization flow. If the typical latency between a user initiating the flow and receiving the notification on the second device is 5 seconds, then the attacker could initiate the attack 4 seconds before the victim, and the victim might approve the attacker's session because that notification arrives first. I assume that doesn't matter much for something low-value like video streaming on a TV, but it might be worthwhile for access to corporate/government files. The device authorization grant seems more secure against that type of attacker, *if* the user is careful enough. (In the public stream of a whiteboard example, an attacker could use the device authorization grant to present unintended files, but they couldn't use it to steal files, right?)
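To make the 4.1.3 comment concrete, here is a rough sketch of the session binding I have in mind. Everything below (class and method names, the session-identifier model) is invented for illustration and not taken from the draft; the point is only that the server remembers which session displayed the code (step C) and rejects redemption (step G) from any other session:

```python
import secrets

class AuthorizationServer:
    """Toy model: codes are bound to the session that initiated the flow."""

    def __init__(self):
        # code -> session id of the consumption device that requested it
        self._pending = {}

    def issue_code(self, session_id: str) -> str:
        """Step C: display a code, remembering the initiating session."""
        code = secrets.token_urlsafe(8)
        self._pending[code] = session_id
        return code

    def redeem_code(self, code: str, session_id: str) -> bool:
        """Step G: accept the code only from the session that initiated it."""
        initiator = self._pending.pop(code, None)  # codes are single-use
        return initiator == session_id

server = AuthorizationServer()
code = server.issue_code("session-A")
# An attacker who copies the code to a different consumption device
# (a different session) would be rejected:
server.redeem_code(code, "session-B")  # False
```

With this binding, the copied code is useless on any consumption device other than the one that started the flow, which is the failure mode I was suggesting the last paragraph of 4.1.3 already implies.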
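And to illustrate the 6.1.5 idea: a filter could simply look for a fixed marker string in decoded QR payloads. The marker below is made up for this sketch; the idea would only work if a real one were standardized, the way the EICAR string is:

```python
# Hypothetical, non-standardized marker string, analogous to the EICAR
# test file. Authorization servers would embed it in QR payloads so that
# spam filters (or scanning apps) can recognize cross-device auth codes
# appearing in channels where they shouldn't, like email or SMS.
TEST_MARKER = "XDEVICE-AUTH-QR-TEST-STRING"

def contains_marker(decoded_payload: str) -> bool:
    return TEST_MARKER in decoded_payload

contains_marker(f"https://example.com/activate?code=ABCD&m={TEST_MARKER}")  # True
contains_marker("https://example.com/activate?code=ABCD")                   # False
```

Again, no idea whether this survives contact with real filters; it's just the shape of the check I meant.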