Screens in places where people might look at an ad will all have built-in image recognition and eye-tracking.
An algorithm/model will calculate the number of people within view and an acceptable level of eyes-on-screen per minute (or some other time increment, TBD by an industry-leading marketing psychologist), depending on the task those people are doing.
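As a rough illustration of the metric described above, here is a minimal sketch of a rolling eyes-on-screen counter. Everything here is an assumption: the class name, the one-minute window, and the idea that some upstream vision model feeds it per-frame counts; the source leaves the actual time increment and thresholds unspecified.

```python
import time
from collections import deque
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class EngagementTracker:
    """Hypothetical rolling eyes-on-screen metric over a fixed time window.

    A camera/vision model (not shown) would call record() with the number
    of eyes currently on the screen; eyes_per_minute() reports the rate
    over the trailing window.
    """
    window_seconds: float = 60.0  # placeholder; the text leaves this TBD
    samples: deque = field(default_factory=deque)  # (timestamp, eyes_on_screen)

    def record(self, eyes_on_screen: int, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        self.samples.append((now, eyes_on_screen))
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window_seconds:
            self.samples.popleft()

    def eyes_per_minute(self, now: Optional[float] = None) -> float:
        now = time.monotonic() if now is None else now
        recent = [n for t, n in self.samples if now - t <= self.window_seconds]
        if not recent:
            return 0.0
        # Scale the windowed total to a per-minute rate.
        return sum(recent) * 60.0 / self.window_seconds
```

The explicit `now` parameter is just to make the sketch testable without a real clock; a deployed version would rely on `time.monotonic()` alone.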
The algorithm/model can also estimate the local demographic.
The short-format video content can be easily tweaked to improve engagement: if the racing-crash clips aren't generating enough engagement, it can try indoor cat clips.
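One common way to implement that kind of content rotation is an epsilon-greedy bandit: mostly show the best-performing category, but occasionally try the others so a slump gets noticed. This is a sketch under that assumption; the category names and scores are invented for illustration.

```python
import random


def pick_clip_category(engagement: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice over clip categories (illustrative sketch).

    `engagement` maps category -> observed engagement score, e.g.
    {"racing_crashes": 0.4, "indoor_cats": 0.7}. With probability
    epsilon we explore a random category; otherwise we exploit the
    current best performer.
    """
    if random.random() < epsilon:
        return random.choice(list(engagement))
    return max(engagement, key=engagement.get)
```

With `epsilon=0.0` this always plays the top category; raising epsilon trades short-term engagement for faster detection of shifting tastes.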
When eyes-on-screen levels are at or above the minimum advertising threshold, display the ad that best matches the target demographic the advertiser set. The ad's contents will also match the activities of the local population.
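Tying the pieces together, the gating-plus-matching step might look like the sketch below. The threshold value, the tag-overlap similarity measure, and all field names are assumptions; the source specifies neither the metric nor how "best match" is scored.

```python
def select_ad(eyes_per_minute: float, local_demo: dict, ads: list,
              min_engagement: float = 3.0):
    """Hypothetical ad selection: gate on engagement, then match demographics.

    Returns None (keep showing filler clips) until the engagement metric
    clears `min_engagement`; otherwise returns the ad whose target tags
    overlap most with the estimated local demographic.
    """
    if eyes_per_minute < min_engagement:
        return None  # not enough eyes on screen yet

    def overlap(ad: dict) -> int:
        # Crude match score: count of shared demographic tags.
        return len(set(ad["target_tags"]) & set(local_demo["tags"]))

    return max(ads, key=overlap)
```

A production system would presumably use something richer than tag intersection (and would break ties deliberately), but the gate-then-rank shape is the point here.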
Certainly. Having worked in advertising for 25 years, that's probably just phase one. Those short videos will eventually be different for each person seeing the screen… and largely A.I.-generated with few humans in the loop. On the flip side, people will probably be able to program their smart glasses to hide all that shit. It's an arms race over our attention already. See: Trudell's "mined mind." Or Bo Burnham, for that matter.