I’m pretty new to the analytics field. My background is in software development, building websites and apps for ad agencies on behalf of big brands. As a developer I was painstaking in my attention to the architecture and implementation of the sites I built. I used the right tools, the best practices, and the smartest patterns. My sites were lean, fast, and cleverly built. They were works of art. They were my beautiful babies.
Take A Little DTM To Ease Your Troubled Mind
Adobe Dynamic Tag Manager was created specifically to address the concerns of your most discriminating developer. With DTM, you don’t need to litter your business logic, interface code, or markup with extra event listeners or data collection calls. If your application code is broken up into discrete modules that each perform a single well-defined function and you want to keep it that way, I salute you, and DTM is here to help you accomplish and maintain your design goals. All you need to do is add a couple of script tags, and you’re done.
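For the curious, the embed looks roughly like this. Note that the library URL below is a placeholder — DTM generates the exact snippet for your property, and `_satellite.pageBottom()` is the call that tells DTM the markup has finished loading:

```html
<!-- In the <head>: load the DTM library for your property.
     The file path here is a placeholder; DTM generates yours. -->
<script src="//assets.adobedtm.com/XXXXXXXX/satelliteLib-XXXXXXXX.js"></script>

<!-- Just before </body>: signal DTM that the page bottom was reached. -->
<script type="text/javascript">
  _satellite.pageBottom();
</script>
```

That's it — no event listeners or data collection calls scattered through your application code.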
Too Good To Be True?
Good web developers are a skeptical bunch. While this is one of our many estimable qualities, it is natural that our virtuous skepticism should lead us to wonder what black magic DTM is performing behind the curtain in order to provide us with this fantastically non-invasive analytics implementation process. Because DTM takes so much work out of the developer’s hands, it raises questions about how that work is being done. What corners are being cut in order to make such a sexy product? What about performance? What about best practices?
With these concerns in mind, I want to address some of the common questions and objections that sometimes arise from the concerned citizens of the web development community.
This may be the most common objection we receive: DTM adds too much weight to the page. The DTM production library weighs in at around 55KB. To those considering making the jump to DTM, this can feel like a lot of weight to drop on top of an existing page load.
This seems compelling: if you want to be faster, you gotta drop some weight, right? Count those calories. I’ve seen The Biggest Loser. I get it.
To gain some context for this number, let’s start by visiting your website. In fact, we can start by visiting virtually any website. I’m going to go to Google, the famously sparse, prodigiously quick-loading search page. If we pull up our developer tools’ “Network” tab, what do we see?
But what about those images?
Those images add up to 90KB. Let’s consider that number for a moment. 90 kilobytes seems quite a lot for a page so visually sparse. Let’s take a look at the image load for a more visually complex website. The homepage for Tealium (analytics provider and website weight loss advocate; the “Biggest Loser” of tag management providers) is pretty snappy looking. Let’s shuffle our way over yonder.
Whoo boy! 793KB of (gzipped) images! That’s a bunch. You could load DTM fourteen times over and still not reach that kind of load.
Okay, maybe DTM is not so big, but still: it’s loaded at the top of the page. While it loads, it blocks everything else on the page from loading.
Since DTM needs to be able to load any kind of analytics tool, and some analytics tools need to be loaded at the top of the page, DTM also needs to load at the top of the page. Therefore, it’s true that DTM is “blocking.”
However, this blocking occurs only the first time DTM is loaded on your site. Every time thereafter, DTM will be loaded from your users’ browser cache, creating a trivial delay in page load times on successive page views (generally <50ms).
But the size of DTM isn’t the only thing that matters. What about the speed and reliability of delivery?
When you’re trying to increase the speed of a page load, overall page weight is a major consideration, but there are other variables to consider.
To understand the lifecycle of an HTTP request, consider the following: if I want to carry a box from one side of the room to the other, there are a few things that will dictate how quickly this can happen. First, I have to walk over to the box from where I am standing, pick up the box, and then carry that box back to the point where I started. How long this takes is determined not only by how heavy the box is, but by how far I am from the box in the first place.
It works the same way with HTTP requests: the request goes from point A, travels to point B, then returns to point A, this time carrying a heavy load (the response). Consequently, the actual geographic location of the requesting computer in relation to the responding server will affect the time it takes to serve up content.
In order to facilitate faster load times, distribution of content to client computers can happen via content delivery networks (CDNs), which are server centers distributed around population clusters. When I make a request for an image hosted on a CDN, that request will be distributed and returned via the closest geographical server.
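As a toy illustration of what a CDN does for you (real CDNs route via DNS and anycast, not application code — the server names and distances below are invented), nearest-edge selection amounts to this:

```javascript
// Toy model of CDN edge selection: serve each request from the
// geographically nearest server. Names and distances are made up.
const edges = [
  { name: "us-east", distanceKm: 4200 },
  { name: "eu-west", distanceKm: 300 },
  { name: "ap-south", distanceKm: 7800 },
];

// Pick the edge with the smallest distance to the requesting client.
function nearestEdge(servers) {
  return servers.reduce((best, s) =>
    s.distanceKm < best.distanceKm ? s : best
  );
}

console.log(nearestEdge(edges).name); // → "eu-west"
```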
DTM provides this functionality right out of the box, and it does this using best-in-industry CDN services. If, however, you wish to use a different CDN or even your own servers, DTM provides you with the flexibility to choose other options.
But there is a definite reason DTM chose early on not to opt for the man-in-the-middle approach some other tag management vendors have taken: by putting a processing server between customers and analytics services such as SiteCatalyst or Google Analytics, these vendors create a single point of failure for the collection of analytics data; if their service goes down, your data is gone. Since DTM is distributed and cached across all client computers, it is impossible for it to “go down” in this same sense.
DTM is going to put crazy amounts of event listeners all over my page! That will drag down the performance of my UI.
Although DTM can listen for virtually any kind of event occurring within the DOM, by default it does not do this by attaching a listener to every single element it wants to hear from. Rather, it uses a technique for listening to browser events called “event delegation.” This technique relies on a principle of the browser’s DOM API we will call the When Children Make Noise, Their Parents Hear It principle. What this principle signifies is that when an event fires on an element nested inside other elements (a “child” element), it does not fire on that child and then call it a day. No: the event “bubbles” up to the element’s parent, then on to that element’s parent, and so on, until it reaches the topmost ancestor, the document itself. That is where DTM listens. In this way, DTM “delegates” the task of listening to the parent, and then determines for itself how best to handle the events it receives.
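To make delegation concrete, here is a toy model in plain JavaScript. Plain objects stand in for DOM elements, and a loop stands in for the browser’s bubbling phase — this illustrates the principle, not DTM’s actual code:

```javascript
// Simplified model of event delegation: ONE listener on the root
// "hears" events that bubble up from any descendant node.
function makeNode(id, parent) {
  return { id, parent, listeners: [] };
}

// Dispatch an event on a node, then let it bubble to each ancestor,
// just as DOM events bubble from child to parent.
function dispatch(node, event) {
  let current = node;
  while (current) {
    for (const fn of current.listeners) fn({ ...event, target: node });
    current = current.parent;
  }
}

// Build a tiny tree: root > list > item.
const root = makeNode("root", null);
const list = makeNode("list", root);
const item = makeNode("item", list);

// Delegation: a single listener on the root, filtering by target id,
// instead of one listener attached to every item.
const seen = [];
root.listeners.push((event) => {
  if (event.target.id === "item") seen.push(event.type);
});

dispatch(item, { type: "click" });
console.log(seen); // → ["click"]
```

The payoff is that a page with a thousand links still carries only one listener, which is why delegation does not drag down UI performance.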
DTM understands me.
I want to be clear: DTM is not “perfect,” and any technical solution has its upsides and downsides.
That being said, the “upsides” of DTM were targeted and developed with the exacting concerns of people like me (web developers and software architects) in mind. As a result, DTM is a tool that I believe any marketing team and development team can be confident adding to their technical toolbelt.
If you have any questions about DTM’s architecture or technical details not covered here, please reach out to us; we would love to hear from you.