Heat Maps on Demand

My attention was caught by this description of a company that provides cheap eyetracking for websites on contract. As the article says, full eyetracking studies, whether you run them yourself or hire a consultant, are quite expensive. The title suggests they keep costs down by using webcams – but I suspect the real savings aren’t the cheap hardware: it’s the bank of testers they have developed, who can use their own computers at home and get paid for viewing websites and sending back the data.
The GazeHawk system works like this: a customer submits the URL of a single page or a screenshot, indicates the number of users they would like to view the site – 10 is the recommended number – and specifies a task to be displayed on screen before the site appears. The default task is “Browse this site as if a friend sent you a link to it”.
It is an interesting service, with a heavy dose of “you get what you pay for” applied. Even if the technology works with an acceptable level of precision, the experimental methodology is shaky. The service draws on a fixed pool of users month after month. They are most likely participating from their homes, so the focusing qualities of a more structured experimental setting are lost – particularly by the time a tester is viewing their hundredth site. The users are not led to the site or page in question in a particularly organic way. There is the option to specify a task, which is necessary, but task selection is a significant part of usability test design and no guidance or assistance is offered. In fact, the default task amounts to having the user just look around the page, which is not really a task at all. And most tasks of actual interest are ruled out by restricting the user to a single page rather than letting them navigate through an entire site.
I’m tempted to try out the service as a website tester myself, to see what the process is like. It is possible that these concerns are addressed further along, and that the site has simply opted for a minimal presentation of fairly basic functionality – sensible if their target market is marketers who want a quick visualization justifying ad placement on a particular page. For that market, this likely is a nice, affordable tool. I’m not convinced it eliminates the need for expensive equipment and consultants entirely, though.

2 thoughts on “Heat Maps on Demand”

  1. Hi Amanda,
    Thanks for the feedback! You raised some excellent points about our experimental methodology, and I’d like to answer them here briefly. First, it’s probably worth pointing out that this is the first service our company has released, and the target market is people who are interested in optimizing the effects of a landing page. Like you said, there are certainly situations in which this particular tool would be unsuitable — we decided to release it as soon as it was valuable for a few uses, and improve it over time.
    Your concerns about re-using participants are definitely valid; we’re putting some real effort into expanding the size of our tester community in order to address this. We also plan on offering additional features which will allow you to match the testers to your particular audience.
    I respectfully disagree with the claim that losing the “focusing qualities of a more structured experimental setting” represents a disadvantage. In fact, I think that many usability studies will benefit from the less intrusive nature of our approach. Letting the testers participate in an environment that they are used to allows us to minimize the Hawthorne effect – you see more natural behavior when people are no longer being obviously monitored.
    On the other hand, I completely agree that task selection is an important part of usability design, and that there is a large amount of room for improvement over how we handle it currently. Like the ability to conduct multiple-page studies, this is a feature which we decided could wait for subsequent releases.
    At any rate, I really appreciate your insightful feedback. We’d be delighted if you signed up to be a tester, and I hope that you’ll keep an eye on us as we continue to improve GazeHawk.
    Thanks,
    Joe Gershenson
    Co-founder, GazeHawk

  2. Joe – thanks for showing up on my site and for the additional information. If your tool is being built with the room for growth you describe, I think you may end up with something with much broader use down the road. It will be interesting to see how you progress!
    The environment issue is tricky, I agree. I absolutely grant that there are serious downsides to artificial lab settings – I would always recommend throwing away the results from the first task or two in a controlled setting, since the subject is likely still just getting comfortable. But with a complete lack of control over the environment, a ton of random biasing factors can come into play. That is realistic, of course, but my instinct would be to counter the natural environment and its chaos with an increased sample size.
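    That intuition can be illustrated with a quick simulation – everything here is hypothetical (the numbers are invented, not GazeHawk data), but it shows why more testers can compensate for noisier home environments: the spread of an averaged fixation estimate shrinks roughly as 1/√n.

```python
import random
import statistics

# Hypothetical model: each tester's recorded fixation x-coordinate is the
# "true" point of interest plus random noise from uncontrolled home setups.
TRUE_X = 400.0   # true point of interest, in pixels (made-up value)
NOISE_SD = 80.0  # per-tester noise standard deviation (made-up value)

random.seed(42)

def estimate(n_testers):
    """Average the noisy fixation readings of n testers."""
    samples = [random.gauss(TRUE_X, NOISE_SD) for _ in range(n_testers)]
    return statistics.mean(samples)

# Repeating the study many times shows the averaged estimate getting
# steadier as the panel grows (spread falls roughly as 1/sqrt(n)):
for n in (10, 40, 160):
    runs = [estimate(n) for _ in range(2000)]
    print(n, round(statistics.stdev(runs), 1))
```

    So a chaotic environment is not fatal by itself; it just raises the number of participants you need before the aggregate heat map settles down.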
    Best of luck – I’ll probably sign up as a tester soon!
