Track Copying with Tracer


Currently there are a lot of great products for tracking where your content appears on the Web, from veteran services like Copyscape to newcomers like FairShare.

However, all of these services have a common limitation. Though they are great for picking up where your content is being used on other sites, they don’t and can’t measure how people are interacting with your content on your site. In short, they can’t tell you what people are copying and pasting on your home page nor can they help you ensure that your work is attributed.

But a new service called Tracer, brought to you by Tynt, can do exactly that. A simple JavaScript dropped into any Web page, Tracer can provide you with information about what text/images are being copied on your site and, in some cases, can help you track where it winds up and even help ensure that attribution is affixed.

Though the system is in an early beta stage, it holds a great deal of promise and is worth looking at for Webmasters interested in understanding how their visitors are interacting with their work.

Not only can this provide valuable content protection information, but also valuable analytics information about popular articles and keywords on your site, in a way that search terms alone just can’t manage.

What Tracer Does

Once you’ve registered for an account and been accepted into the beta, you are given a single line of JavaScript code to place in your site’s theme. It can be put anywhere in your site’s HTML before the closing body tag. Once there, the script begins to work like other statistics packages, such as Google Analytics, but tracks a very different set of data.
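The placement looks roughly like the sketch below. The script URL and account parameter here are placeholders; the actual line comes from your Tracer account and will differ:

```html
<!-- Hypothetical example only: the real script tag is provided by
     Tynt after registration, and the URL shown here is made up. -->
  ...page content...
  <script type="text/javascript"
          src="http://tracer.example.com/tracer.js?user=YOUR_ACCOUNT_ID"></script>
</body>
```

Because the tag sits just before the closing body tag, it loads after the page content, much like other analytics snippets.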


(Note: Stats are from only a portion of one day and there seems to be a bug in my account preventing page views from showing.)

Currently, Tracer tracks four sets of data:

  • Selections: This detects when a user has selected a portion of the page, such as highlighting a phrase or a sentence.
  • Copies: This detects when a user actually copies an element on the page, whether it is a section of text or an image. It also breaks text copies down into small copies, meaning fewer than seven words, and large ones.
  • Linkbacks: Tracks where the content is pasted on the Web, more on this in a minute.
  • Page Views: A standard statistic that tracks views of the individual pages of your site. It is meant to be more of a “sanity check” and to aid in the calculation of some percentages (i.e. the percentage of visitors who selected text on a page).

Tracer will let you take a look at the specific phrases that were copied, as well as show you breakdowns of the most popular keywords both in a list format and in a tag cloud.

The other element of the service that many will be excited about is that, when a copy of a section of text is made, Tracer automatically adds an attribution line. For example, if I copy the first paragraph from my previous column about FairShare entering public beta, it appears like this:

Attributor’s individual-targeted FairShare service, previously covered here, has announced today that the service is now open to the public, no beta keys are required.
Read more: “FairShare Enters Public Beta | PlagiarismToday” –

Obviously, this is a raw paste without any formatting on my part. It also allows you to see that the URL has a “#” element added to the end of it, which allows Tracer to monitor incoming links and report those in the “linkback” column of the dashboard.
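The mechanism described above can be sketched roughly as follows. This is a hypothetical illustration of how a Tracer-style script might build the attributed copy, not Tynt’s actual code; the function name, the attribution format, and the “#” fragment token are all assumptions:

```javascript
// Hypothetical sketch of a Tracer-style attribution mechanism.
// Not Tynt's actual implementation; names and formats are assumed.

// Build the attributed version of a copied passage. The "#" fragment
// (an assumed tracking token) is what would let the service recognize
// the link as an incoming "linkback" if the paste is published with
// the URL left intact.
function buildAttributedCopy(copiedText, pageTitle, pageUrl, trackingId) {
  var trackedUrl = pageUrl + '#' + trackingId;
  return copiedText + '\n\nRead more: "' + pageTitle + '" - ' + trackedUrl;
}

// In a browser, this would be wired to the copy event, replacing the
// clipboard contents with the attributed version, e.g.:
//
// document.addEventListener('copy', function (e) {
//   var selection = window.getSelection().toString();
//   if (!selection) return;
//   e.clipboardData.setData('text/plain',
//     buildAttributedCopy(selection, document.title,
//                         location.href, 'ixzzEXAMPLE'));
//   e.preventDefault();
// });
```

This also makes clear why removing the appended line defeats the linkback tracking: without the tagged URL in the paste, there is nothing for the service to find.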

All in all, if you’re a Webmaster who is interested in seeing how users are interacting with your content, there are a lot of compelling reasons to give Tracer a try.

Potential Uses

When you first log into your Dashboard, you’re greeted with a very glossy animation showing thumbnails of your site floating amid the list of recently copied phrases. Though the animation is attractive and interesting, it is ultimately fairly useless, even though it allows you to click through to a highlighted version of your page showing the passage that was copied.


Instead, the real power of Tracer is in the boring, dry data buried within the body of the dashboard, including the list of top keywords, the tag cloud and the most popular URLs with a table of actions taken.

With this information, there are two different approaches users could take with the service:

  1. Analytics: The most powerful use, for most bloggers and other creators, will likely be the analytics element of it. You can use Tracer to track what keywords are the most popular on your site and what pages are getting the highest level of interaction in terms of selection and copying. This can be used to make decisions about future content to create or choosing which pages to promote.
  2. Copy Protection: Though Tracer doesn’t do anything to prevent copying, meaning it is not a DRM system, it can tell you what posts are being copied the most heavily. This sets those pages up for greater scrutiny, points to sites that are using your content with links, and tells you what images are the most popular among those who are seeking to copy photos. This can help you identify which content needs the most attention and also help you find some uses you might have missed otherwise.

For most, it seems likely that the analytical aspect of the service will be the most meaningful, especially with the service set up as it is. But for those who are interested in seeing how their content is being used, it might be a compelling supplement to other services.

Either way though, there are some serious limitations and problems that need to be weighed before using the service for either purpose, especially considering that some visitors may be unnerved by the way the service works.

Drawbacks and Concerns

Long-time readers of this site have almost certainly begun to predict many of the problems with Tracer. Because it is a JavaScript-based system, it is vulnerable to a series of limitations and issues that may be a deal-breaker for some Webmasters.

Consider the following:

  1. No RSS Support: Since Tracer is JavaScript and most RSS readers do not support the language, it will not and cannot work on RSS feeds. For bloggers, who likely get much of their readership and interaction through RSS feeds, not to mention face the most serious content scraping issues via RSS, this can be a huge hole in the monitoring.
  2. No Detection Without Attribution: Tracer can only detect uses of copied content if the copy remains wholly intact, complete with attribution. If the attribution is removed or modified, Tracer can’t see the link and will only detect the initial copy with no information about where it went.
  3. User Discomfort: Though Web surfers have grown accustomed to being monitored by traffic packages such as Google Analytics, Tracer also tracks and modifies their copy/paste function. It does so without any warning that I could perceive (though that could just be my browser) and many users may be put off by this. If you have any thoughts on this, please leave a comment as I’ll be very interested to hear.
  4. Increased Load Times: Though, in my testing, Tracer always seemed to load fast on PT, you are still calling a JavaScript from a remote server. It is easy to see how this could increase the load time of a site, especially if something goes wrong on Tracer’s end.
  5. Lack of Options: Though the feature set is still being fleshed out, there are a lot of options that aren’t available at this time, including the ability to switch off the automatic addition of attribution, changing the definition of a “short” copy, etc. At this time, the only “option” is the ability to change your password.

Some of these issues may be addressed in future updates of the service, especially the lack of options and, possibly, the load time issue if Tynt decides to create plugins for Tracer, but the issues with RSS tracking and possible user discomfort will likely remain.

The question is going to be how Webmasters will use Tracer and whether it will justify the potential drawbacks.


Tracer, in my opinion, has a great deal of potential. Though it will never be a replacement for other content tracking tools such as FairShare and Copyscape, it may be a good supplement. Especially compelling is its ability to track image copying in addition to text copying and its ability to detect what is going on the site itself, rather than waiting for copies to show up in various search engines.

I have Tracer installed on Plagiarism Today currently, so feel free to play with it as a visitor and see what you think of it from that perspective (it will also help me test the back end). I may not keep it, however. I am not very comfortable with the idea of messing with people’s copy/paste function without clear warning and, since it can’t be switched off independently, I’ll likely remove it altogether in a few days until that option is added.

In the end, my thought on the service is that it holds a great deal of promise but, at this time, isn’t really mature enough to widely recommend, especially for bloggers. If you operate a non-RSS Web site or don’t see a lot of RSS scraping, such as with a store, photo gallery, etc. it may be great to get some “on the ground” information about what is going on when people visit your site. For RSS-heavy sites, the information will likely be incomplete and the risks may not justify the novelty of it.

For most, this will be far more useful as an analytic service than a copy detection/protection one. However, it is easy to see how, with some work, it could become a nice complement to existing copy detection products.

If nothing else, Tynt has definitely created a unique product with Tracer, and it gives Webmasters a lot of reasons to be hopeful. It may not be living up to its potential right now, but it is a service well worth watching.
