Ad Ops 101

9 Ways to Design the Best Ad Ops Team in the World

Recently I was speaking with a friend who’s heading up a new digital publishing organization that’s taking their sales in-house, and they’re shopping for all the usual trappings of ad technology, as well as standing up an Ad Ops team from scratch.

At first I thought, “good luck with that!”, but after some more serious thought, it occurred to me what a unique opportunity he had to build a world-class organization.  After all, so many organizations started their Ops teams so long ago that they’re now saddled with entrenched platforms and business lines they probably wouldn’t choose to support today if they didn’t have to.

Starting a new team in this day and age still has all the downsides of inexperience, but all the benefits of learning from everyone else’s mistakes.  After all, how many of us in the Ops community haven’t thought at one time or another, “if I could just blow it all away and start from scratch…”, oh how we’d do things differently.

It got me thinking – what would the best Ad Ops team in the world look like? (more…)

The Display LUMAScape Explained

Ah, the LUMAscape, who in the digital marketing world doesn’t know it as an old friend at this point?

Display LUMAScape

First debuted in 2010 by the ad tech banker Terry Kawaja, the LUMAscape has been through many iterations at this point, adding companies, changing categories, and noting acquisitions over the years.  From the very beginning this image was a hit with the digital marketing set, as it provided a way to understand a complex industry, as well as a symbol for how difficult it is to work in a space so convoluted!  The LUMAscape was a great way for ad technology people to explain the growing industry within their own companies in a visual way, as well as to understand how new companies were aligned and fit together.  Kawaja & team’s image was also a solid way to understand what a whole lot of companies even did, so if you were, say, shopping for a data management platform, you could get a quick sense of who the four or five companies in the space were.  For Kawaja’s company, LUMA Partners, the LUMAscape was also a great way to show the sheer amount of fragmentation in the industry and the possibilities for consolidation through acquisitions, advisory work in which his company specializes.

Whatever the motivations, the LUMAscape is an iconic image in the digital marketing industry, and a must-know resource that Kawaja’s company has generously kept up to date for nearly five years now.  But the graphic itself only tells a high-level story and can oversimplify, as the LUMA Partners website readily admits, so I thought it could be useful to take this image down one more level and explain some of the nuances and sub-categories within each service.  This article describes what each category covers, what a lot of the companies on the LUMAscape actually do, as well as the differences between key services within specific categories.  For those who are new to the industry, I hope this post not only demystifies this graphic, but gives you a well-rounded sense of how the digital marketing industry functions; for industry veterans who already know the basics, there are probably still a few things to learn. (more…)

Ad Ops Skills: Writing an Ad Spec

Ad Spec Blueprint

While not the sexiest topic in the world, writing and maintaining an advertising specification document, or ad spec, is among the most important responsibilities of any Ad Ops team. Ad specs define the nitty-gritty details of what ad formats, functionalities, and technologies a publisher can and will support as a matter of business, and help streamline communication with agencies, clients, and internal sales teams. Just as important, maintaining a detailed ad spec keeps publisher Ad Ops teams organized when it comes to campaign QA, and consistent in their communications with internal and external teams.

If you’re a large organization, a well thought out ad spec keeps your business scalable, enabling even the most junior team members to respond to detailed technical questions and requests from the sales organization, which means decision making stays fast and distributed, even at high volumes. For small organizations, thinking through ad spec requirements is practically a rite of passage in becoming a serious digital publisher, a landmark task that forces your company to move from ad hoc decision making to rational business practices and policies you can use as you grow.

Who Should Write the Ad Spec?

In many cases, the ideal ad spec for the publisher at large is written with competing priorities in mind, balancing the wants and needs of the sales and marketing teams with those of the engineering and development groups, not to mention what the Ad Ops team can realistically support at scale. The Ad Ops team should absolutely own the ad spec document, but it’s unwise to make ad spec decisions in a vacuum, without input and advisement from outside teams. Just as it drives the Ad Ops team crazy when other departments make decisions that impact their world without giving them a seat at the table, it’s a recipe for trouble when Ops ignores teams with a downstream interest in how ads affect site performance.

The most productive approach is to define boundaries and a framework that the broad group of stakeholders can agree upon, and then let Ad Ops handle the due diligence on what ad technologies can reliably work within those limits.

Big Picture Stuff

From the technology side, I would submit that you start the ad spec process by defining guidelines on latency and uptime – these metrics have the most noticeable impact on the user experience, and chances are, overall pageload time is what your web development and IT groups care most about in terms of how ads affect their jobs. Latency means how long it typically takes a 3rd party technology to respond to a request, such as an ad request, and uptime means how often the technology will actually respond to a request at all. These metrics are typically outlined in service level agreements (SLAs) in technology vendor contracts; Ad Ops won’t get an SLA from 3rd party technologies, but can at least ask about these numbers as part of their due diligence in approving or certifying 3rd party ad servers or rich media vendors. For example, knowing whether the 3rd party technology has geographically distributed colocation facilities, meaning its servers are physically housed in diverse locations with the high-speed fiber optic connections of an ISP, should quickly separate the serious vendors from the rest. Internal technology teams can not only help define acceptable levels of performance, but can also weigh in with the right questions for Ops to ask in their evaluations.
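To make the latency and uptime math concrete, here’s a minimal sketch of how Ops might score a vendor’s ad-call logs against SLA-style targets. The log format, thresholds, and numbers are all invented for illustration, not drawn from any real vendor contract.

```python
# Hypothetical sketch: scoring a vendor's ad-call log against SLA-style targets.
# Thresholds and log values below are illustrative, not from a real contract.

def evaluate_sla(responses, max_latency_ms=500, min_uptime=0.999):
    """responses: list of latencies in ms, or None for a timed-out request."""
    total = len(responses)
    answered = sorted(ms for ms in responses if ms is not None)
    uptime = len(answered) / total
    # 95th-percentile latency: the value at the 95% mark of the sorted list
    p95 = answered[max(0, int(0.95 * len(answered)) - 1)]
    return {
        "uptime": uptime,
        "p95_latency_ms": p95,
        "meets_sla": uptime >= min_uptime and p95 <= max_latency_ms,
    }

# Example: 1,000 ad calls, two of which never responded
log = [120] * 950 + [480] * 48 + [None, None]
print(evaluate_sla(log))
```

Even a simple report like this gives Ops a shared, numeric vocabulary for vendor conversations that would otherwise stay anecdotal.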

In terms of sales and marketing, clearly outlining all the potential products, their configurations, and how they are represented in the marketplace should be the first step. What ad sizes, formats, and functionalities will customers want and expect? Going through this exercise can help compile an exhaustive list of products, which Ops can then use to define acceptable attributes like file size, expansion limits and directions for rich media units, and what types of creative are supported where, and on what platforms. This is particularly important in today’s environment, when most digital publishers support not only desktop display advertising, but mobile, video, and tablet as well. Perhaps mobile video units are only available for application takeovers, not mobile web, or maybe the file size limits for standard media are set at 40K, but interstitial units can go up to 100K. Different rules for different products not only help the sales organization move signed contracts into campaign execution quickly, they can serve as QA cheat-sheets for the Ops organization during campaign setup.
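As a hypothetical illustration of how those per-product rules can double as a QA cheat-sheet, here’s a sketch that encodes a few spec entries as data and checks a submitted creative against them. The products, sizes, file limits, and formats are made up for the example, not any real publisher’s spec.

```python
# Hypothetical sketch: the same rules published in the ad spec document,
# encoded as data so they can drive automated campaign QA.
# All sizes, limits, and formats below are invented examples.

AD_SPEC = {
    ("display", "300x250"):      {"max_file_kb": 40,  "formats": {"gif", "jpg", "html5"}},
    ("display", "728x90"):       {"max_file_kb": 40,  "formats": {"gif", "jpg", "html5"}},
    ("interstitial", "640x480"): {"max_file_kb": 100, "formats": {"html5"}},
}

def check_creative(product, size, file_kb, fmt):
    """Return a list of spec violations; an empty list means the creative passes."""
    rules = AD_SPEC.get((product, size))
    if rules is None:
        return [f"unsupported product/size: {product} {size}"]
    problems = []
    if file_kb > rules["max_file_kb"]:
        problems.append(f"file size {file_kb}K exceeds {rules['max_file_kb']}K limit")
    if fmt not in rules["formats"]:
        problems.append(f"format {fmt!r} not supported for {product} {size}")
    return problems

print(check_creative("display", "300x250", 55, "html5"))
```

The design point is that the spec lives in one place: the document sales sends to agencies and the checks Ops runs at trafficking time can’t drift apart.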

Getting In the Weeds

Once you cover off on the basics, it’s time for Ops to really dig in and start testing various configurations to ensure the ad spec works. That means understanding how to protect against large discrepancies that can seriously affect customer invoices, discovering the outlying factors that can break ads depending on specific site section code, and understanding how to keep the company’s advertising in compliance with privacy policies and industry best practices.

For example, Ad Ops should understand cache-busting requirements for 3rd party technologies and the intricacies of click tracking, both to minimize the chance of large discrepancies and so team members know how to QA, and if necessary correct, problematic ads. For rich media technologies to function, publishers often have to include so-called bridge or gateway files in a local server directory, which allow expanding ads to bust out of iframes, though these techniques may not work with asynchronous ad calls or with ad slots written with JavaScript instead of an iframe. Ops has to understand where the site code can potentially interfere with successful ad executions and consider those factors as they define the ad spec.
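The mechanics of cache-busting are easy to sketch: each ad call carries a large random number, so browsers and proxies treat every request as unique and each server counts every impression rather than serving a cached copy. The tag URL and the `[CACHEBUSTER]` macro name below are illustrative; real ad servers each define their own macro syntax.

```python
# Minimal sketch of cache-busting. The tag URL and the [CACHEBUSTER] macro
# name are invented for illustration; real ad servers use their own macros.
import random

TAG_TEMPLATE = ("http://ad.thirdparty.example/serve?placement=123"
                "&ord=[CACHEBUSTER]")

def fill_cachebuster(tag, rng=random):
    # A large random number makes a repeated URL (and thus a cached,
    # uncounted impression) vanishingly unlikely.
    return tag.replace("[CACHEBUSTER]", str(rng.randint(0, 10**10)))

print(fill_cachebuster(TAG_TEMPLATE))
```

When both the publisher’s ad server and the 3rd party insert a fresh random value on every call, their impression counts stay aligned, which is exactly where missing cache-busters show up as discrepancies.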

Finally, Ops should keep up to date with industry trends and best practices as defined by trade groups like the IAB, DMA, DAA, and NAI to keep the ad spec current. In years past, the IAB started certifying 3rd party tracking mechanisms against its so-called bots & spiders list, which helps ensure that non-human traffic isn’t counted in ad server reporting. More recently, trade groups have defined self-regulatory policies for transparency around user tracking, and acceptable 3rd party opt-out mechanisms. These kinds of services and developments may or may not impact the ad spec directly, but they often inform the diligence the Ad Ops group requires to approve new partners, audit existing vendors, and maintain a professional reputation in the marketplace.

Make Your Ad Spec Darwin-Friendly

Like most everything in digital advertising, change is inevitable, and the ad spec is no exception. The best Ad Ops teams think of their ad spec as a living document, one that must regularly evolve. A few years ago, many publishers didn’t have a mobile business, and certainly didn’t have a tablet business; today, however, you’ll find ad specs with detailed sections for each. Similarly, outdated information can be retired from the spec – it may have been all the rage in 1998, but who really sells the 468×60 ad format today? Publishers have to revise their ad specs to conform with updates to Flash, contemplate new technologies like ad verification services, and generally keep pace with the marketplace.

This is a good thing, because it pushes Ad Ops to plan for the future and stay solution oriented. When faced with a new technology, ad format, or vendor functionality, Ops should ask how to add it to the spec rather than if it should be added. Plan, at minimum, a semi-annual review of your ad spec, and even better, a quarterly review with internal stakeholders. In many cases your Ops team might find that they need to involve other teams to update the ad spec. For example, to enable mobile rich media units, Ops may need to work with IT to integrate a vendor SDK into the application code and to ensure that the technology functions correctly before rolling out the update. Sales may have feedback that customers are increasingly asking for new creative sizes. Things are constantly in flux, so regular reviews of the ad spec can help keep Ops on the overall company’s roadmap, on both the technology side and the product side.

Ad Specs in the Wild

It would likely take an entire book to detail all the decisions and elements in a successful, professional ad spec; instead, it’s far easier to look at what others have done for all the technical specifics. Thankfully there are a number of companies that publicly post their digital ad specs, which, incidentally, is highly recommended to simplify communication between internal and external teams.

To list a few organizations in particular, The Washington Post, Yahoo!, AOL, and CBS Interactive all have comprehensive, cross-platform ad specs that are best-in-class. In addition to those companies, the IAB defines industry-wide creative guidelines that can help inform the necessary decisions to make in writing your ad spec.

The Future of Geotargeting is Hyperlocal

This is the fourth article in a four-part series on Geotargeting. Click here to read parts one, two, and three.

So-called hyperlocal geotargeting, particularly on mobile platforms, is the real promise of geotargeting in the future.  Hyperlocal is far more granular than just a zip code; it’s as specific as your exact location, within a 10 meter radius.  If you own a smartphone, chances are you’ve already taken advantage of these systems to find a nearby restaurant, get directions while lost, or figure out the best mass transit route from one place to another.  From a mobile perspective, many services and apps depend on this hyper-accuracy to work correctly, and the same information offers huge potential for innovation to the advertising community.  For example, a company might run a campaign that serves a unique offer to someone who is within a certain distance of its stores.  While likely not all that scalable, it might be particularly appealing for local, brick-and-mortar businesses.

Hyperlocal Geotargeting Via GPS

Technically speaking, hyperlocal is also likely to be far more reliable than traditional geotargeting on the desktop, because unlike the desktop, IP address won’t be the mechanism anymore; the device signal itself will.  What does that mean exactly?  In some cases, geotargeting will leverage a device’s GPS receiver in concert with a customized table of coordinate ranges to identify targetable impressions.  Up until a few years ago, using GPS signals to deliver advertising would have been all but impossible due to significant latency – up to 30 seconds for the so-called time to first fix (TTFF), the time it takes a receiver to learn the location of the GPS satellite constellation (the physical positions of the GPS satellites in orbit above the earth), which is a function of how often the satellites broadcast their signal.  While generally reliable, 30 seconds is an eternity to ad delivery systems, and hardly a realistic solution for delivering a timely message.

Today however, TTFF is usually only required for non-cellular devices, like standalone GPS systems. For things like smartphones, the GPS coordinates are determined by a process known as ‘assisted GPS’, which speeds up geolocation by referencing a saved copy of the satellite constellation locations known as an almanac. The almanac details the exact locations of every GPS satellite in orbit at regular intervals, as well as the health of the signal. Every day, the cell towers download a fresh copy of the almanac, so instead of needing to acquire a first fix, your smartphone can simply rely on the cell towers to acquire its GPS coordinates in no time at all.

Hyperlocal Geotargeting via Triangulation

In addition to GPS, one concept gaining traction is the notion of signal triangulation by a dedicated 3rd party.  The idea here is that every mobile device has an antenna that not only broadcasts a signal but recognizes other wireless signals, like Wi-Fi routers and cell phone towers, in addition to the GPS satellite signals. Now, if someone were able to read those signals off the device, identify those other transmitters, and look up the physical location of each one, they could use that information to triangulate the mobile device’s exact location, all with incredible accuracy.
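The geometry behind that idea can be sketched in a few lines: given three transmitters at known positions and an estimated distance to each, the device’s position falls out of two linear equations. This is an illustrative flat-plane simplification; production systems like the ones described below also have to cope with noisy signal-strength estimates and many more transmitters.

```python
# Illustrative sketch of trilateration: locate a device from three
# transmitters at known positions and the estimated distance to each.
# Coordinates are flat-plane meters, a simplification of real systems.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the three circle equations pairwise cancels the x^2 and y^2
    # terms, leaving two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A device 5m from a router at the origin, 5m from one at (8, 0),
# and 3m from one at (4, 6) must be at (4, 3).
print(trilaterate((0, 0), 5.0, (8, 0), 5.0, (4, 6), 3.0))
```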

If that sounds like science fiction, take a moment to familiarize yourself with a company called Skyhook Wireless, which is doing just that, and has been for years.  They already have millions of wireless signals mapped for virtually every street in the country, and have a response time that is a fraction of GPS’s, around one second.  There’s a very cool video that explains how their process works available on their site.  Their product is in production for a long list of major companies, including many of the major cell carriers.  Google and Microsoft, for their part, have opted to build their own systems that work on a similar process of triangulating user location based on Wi-Fi signals.  In many ways, the future is now!

Hyperlocal Desktop?

Outside of mobile, there’s a similar thread of innovation happening on the desktop side, though it isn’t nearly as advanced, and still relies on IP address, since many desktop systems are directly cabled to their networks and don’t broadcast or receive a wireless signal.  Just this year, computer scientist Yong Wang demonstrated that by using a multi-layered technique combining ping triangulation and traceroutes with the locations of well-known web landmarks, like universities and government offices that locally host their services and publicly provide their physical addresses, he could accurately map an IP address to within 700m, versus the 34km that traditional traceroute triangulation produces.  While this method isn’t in production as of yet, it could be soon, since Wang’s process is quite similar to the existing methodology, only applied at a much finer granularity.

Limitations of IP-Based Geolocation

This is the third article in a four part series on Geotargeting. Read parts one and two.

Despite the complexity and scientific approach of IP-based geolocation, there are well known limitations and inaccuracies in the current methodology.  While geolocation data is usually extremely accurate down to the state or city level, as services demand more granular data, many of the current geolocation services start to break down.  The loss in overall coverage is quite small, but accuracy can be another story.

Server Location vs. Machine Location

One of the more challenging aspects of IP-based geolocation is that often, geolocation services end up using the location of the server through which that IP is accessing the internet, not necessarily the location of the end user’s machine. So however impressive you may have found the diagram in the last article on IP triangulation, the method may end up targeting the wrong location.  The classic example known within Ad Ops circles was AOL’s dial-up service, which in its heyday represented a large share of internet users.  AOL’s servers were all physically located near its headquarters in Virginia, so every IP address hosted by AOL was shown to be located in Virginia, even though users were spread throughout the country.  Today, this is much less of a problem, because most consumers have a high speed connection serviced by a locally hosted ISP, but it exposed the problem in a big way at the time.

That said, local ISPs’ network routers, while usually quite close, are frequently in different zip codes, so while coverage remains high for most IPs at a granular level, accuracy can be less reliable. When researching this article from my location in New York City, most services were more than 7 miles off my actual physical location, perhaps a small difference in much of the country, but an enormous gulf in an area as dense as Manhattan.  Every service, however, was correct about my location at the country, state, and city level.  You can check your own location on MaxMind’s demo page, which, incidentally, was one of the more accurate services. (more…)
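If you want to quantify how far off a lookup is, the haversine formula gives the great-circle distance between the reported coordinates and your known location. A minimal sketch, using illustrative coordinates (not my actual test data):

```python
# Sketch of measuring a geolocation service's error: haversine gives the
# great-circle distance between a reported point and a known actual point.
# The example coordinates are illustrative points around New York City.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    R = 3959.0  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# e.g. an actual location in Lower Manhattan vs. a lookup that lands in Queens
print(round(haversine_miles(40.713, -74.006, 40.728, -73.794), 1))
```

A few miles of error barely matters for DMA-level targeting, but as the Manhattan example above shows, it can put a hyperlocal campaign in entirely the wrong neighborhood.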