I ditched my Media Temple Grid-Container

Back on February 3rd I mentioned signing up for Media Temple’s MySQL GridContainer. The GridContainer is an add-on to their Grid-Service (gs) shared hosting package. After about a week of horrible performance I cancelled it and went back to the standard Grid-Service.

The container seemed plagued with problems from the moment it was activated. My sites were sluggish due to high latency, and the container crashed every 8-10 hours. And there’s apparently no auto-recovery or notification when a container crashes, so it’s entirely up to you to monitor your site’s uptime. That alone is a deal-breaker for me.

My trouble with Media Temple actually started before the GridContainer was even activated. I submitted the container request on 1/28 and received a response the same day saying it was in the queue, and that I would receive an additional email when it was ready to use… typically within one hour. On 1/31 I contacted them again because I hadn’t received another email, and the container still wasn’t operational. Their response was to apologize for the missing email, and to inform me that the container activation had only taken 15 minutes and had been active since 1/28. This, of course, was a blatant lie. I never doubted that, and I later confirmed with their tech support rep that their log recorded the activation on 1/31. Now, everyone makes mistakes, and this was no big deal to me until they chose to lie about it. That’s just crappy, and a little surprising from a company that supposedly prides itself on transparency.

In addition to the poor performance, the migration to the container corrupted an installation of WordPress I had running on a sub-domain. That site and its database became totally unusable and had to be deleted. Luckily for me, that installation was a test site for an upcoming project, and losing it didn’t do much harm. But the fact that this could happen at all is unforgivable.

The “advanced reporting tools” that Media Temple boasts about as part of the GridContainer were totally useless. The built-in query analyzer was logging hundreds of thousands of (phantom) slow queries, but the report did not identify which database the queries were coming from.
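For contrast, if you can get at a raw MySQL slow query log (which a (gs) account may well not expose, so treat this as a hypothetical sketch, not a description of Media Temple’s analyzer), attributing entries to a database is not hard, because the log writes a `use <db>;` line whenever the active database changes:

```python
from collections import Counter

def slow_queries_per_db(log_text):
    """Count slow-query-log entries per database.

    MySQL's slow log emits a `use <db>;` line only when the active
    database changes, so entries without one inherit the last db seen.
    """
    counts = Counter()
    current_db = "(unknown)"
    entry_open = False  # between a `# Query_time:` header and the entry body
    for raw in log_text.splitlines():
        line = raw.strip()
        if line.startswith("# Query_time:"):
            entry_open = True
        elif entry_open and line and not line.startswith("#"):
            # first body line of the entry: a `use db;` if the db changed
            if line.lower().startswith("use ") and line.endswith(";"):
                current_db = line[4:-1].strip("` ")
            counts[current_db] += 1
            entry_open = False
    return counts

# Toy excerpt in the standard slow-log format (the db name is made up):
sample = """\
# Time: 090210 14:02:11
# User@Host: db12345[db12345] @ localhost []
# Query_time: 4.2  Lock_time: 0.0  Rows_sent: 12  Rows_examined: 80000
use db12345_wp;
SET timestamp=1234274531;
SELECT * FROM wp_posts;
# Query_time: 6.1  Lock_time: 0.1  Rows_sent: 1  Rows_examined: 200000
SET timestamp=1234274590;
SELECT option_value FROM wp_options WHERE autoload = 'yes';
"""
print(slow_queries_per_db(sample))  # Counter({'db12345_wp': 2})
```

A per-database tally like this is exactly what the GridContainer’s report failed to provide.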

I called tech support several times during the week, and they offered no explanation for why the container was crashing. Their best guess was that my site was maxing out the container’s resources. This is a typical Media Temple response: they generally blame the user and rarely entertain the idea that the problem could be on their end.

It’s worth noting here how the typical Grid-Service works. All sites on a particular cluster run on a shared installation of MySQL server, called the SmartPool. If your site starts using more than its allotted resources, it’s flagged and temporarily moved to a GridContainer. When usage returns to normal, the site is moved back into the SmartPool. They do this so other sites around you don’t suffer the “bad neighbor” effect. Optionally, you can buy a GridContainer for full-time usage, which is what I was trying to do. If you’re operating in the SmartPool, you can only be moved to a container so many times per month before you get in trouble.

The fact that Media Temple was telling me I was maxing out the resources of my container just didn’t make any sense. My site was never flagged when it was on the SmartPool (which has half the resources of the container). If I wasn’t maxing out the resources of the SmartPool, how could I possibly be maxing out the resources of the container? To me, this means one of two things:

  1. There is something wrong with my GridContainer, or
  2. The reporting tools on the SmartPool aren’t worth shit, and sites can use far more resources than they’re supposed to and never be flagged.

I tried to convey this logic to the Media Temple tech support reps, but they were unwilling to concede either point.

Ultimately they increased the buffer on my container, and at the same time I started using a caching plugin for WordPress. One or both of these things seemed to have solved the crashing problem. However, the container just didn’t live up to its hype, and I didn’t think it was worth the extra $20 per month, so I cancelled it.
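For context, “increasing the buffer” on a MySQL container generally means raising the server’s cache settings. The fragment below is only an illustration of the kind of MySQL 5.x knobs involved, with made-up values; it is not what (mt) actually changed:

```ini
# my.cnf fragment -- illustrative values only, not (mt)'s actual settings
[mysqld]
key_buffer_size  = 64M   # MyISAM index cache
query_cache_size = 16M   # cached SELECT results
table_cache      = 512   # open table handles (renamed table_open_cache in 5.1)
```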

As this post has started to run a little long, I’ll write a followup post with some thoughts on Media Temple in general, and if I plan to stay with them.

11 thoughts on “I ditched my Media Temple Grid-Container”

  1. Can you clarify a little what you mean by

    “The built-in query analyzer was logging hundreds of thousands of (phantom) slow queries”


    Were they not real slow queries (hence “phantom”), or were you just unable to find out where they were originating from?

  2. @ Oliver – Well, I’m really not sure. The queries being generated would suddenly spike in both number and latency – logging hundreds of thousands of extra queries. But, there was no corresponding spike in traffic. Media Temple couldn’t offer any explanation for this, and the log was pretty useless.

    Here’s a screenshot from the WP latency tracker.

    These extra (slow) queries seemed to come from nowhere. That’s why I called them “phantom”.

  3. Hmm, I see… I wonder if the slow query tracking tool would log queries as slow if the database server in general got slowed down? So in essence they’re not really “slow queries” but rather just being executed slowly, and logged as such?

    I had a GridContainer a while back, but didn’t notice a big enough speed improvement to warrant the price. The waves (latency range) maybe got smaller, but the actual watermark was maybe even a bit higher.

    The migration back and forth between SmartPool and container was so smooth that I actually got suspicious as to whether the containers are just marketing hype: more memory allocated to your account while you’re still essentially on the SmartPool ;)

    All that said, I think these last couple of months have been very stable and speedy enough.

  4. Ya, I’m not sure on the tracking.

    Right now I’m running a caching plugin. Without it my site would run horribly slow. The WordPress admin is painfully slow for me now – almost to the point of being unusable.

    I believe that how well a (gs) account runs depends heavily on which cluster you’re on. Some are slow and some are fast, so it’s just a matter of luck, unfortunately.

    I’m actually debating asking to be moved to a new cluster, or possibly even upgrading to (dv).

  5. Well, they (mt) say there’s no difference between the various clusters, yet many people feel otherwise. I’m on cluster 6 and you’re on cluster 1, so there may indeed be differences. Some speculate it’s because clusters 1–4 resolve via DNS to gridserver.net, while clusters 5 and 6 resolve to clusterserver.net, the conclusion being that clusters 5, 6, and newer are built on a newer architecture. Could be true.

  6. I didn’t realize 5 and 6 were newer, but that makes sense with something that I’ve noticed… I have a client on cluster 6 and in my tests their site is much faster, and they experience far less latency. Thanks for the information.

    How were you able to tell I’m on cluster 1?

  7. I was able to tell it like this:

    Normally I’d say there shouldn’t be a difference in something like this, but it seems the grid is pretty… ehm… rigidly built, in the sense that I’ve had MySQL version 5.1.26 or something like that ever since I got them to upgrade me to version 5. So it seems like they can’t easily deploy updates. Each grid cluster may be somehow unique, with the older ones updated very slowly while the new ones are built from the beginning with newer software and hardware. Makes sense?

    Have a nice weekend =)

  8. Great tip on how to know what cluster a site is on. Thanks.

    The rigidity of the grid actually makes sense. Until a couple months ago I was still on MySQL 4. For years they told me my account couldn’t be upgraded because it was too old. They said I would have to open a new account, manually move everything over, and then close the original account. I waited so long to do it that they eventually came up with a way to upgrade me.

    I also find it very curious that clusters 5 and 6 resolve to clusterserver.net (vs. gridserver.net). Cluster Server (cs) is the name of their (long overdue) next-gen shared hosting package that will replace (gs). I wonder if 5 and 6 actually use some of that technology now?

    You have a nice weekend too :)

  9. Hi Paul,

    I’m the author of the WP Latency Tracker plugin. It’s great to see it in use and helping out.

    I wrote the plugin for exactly the reasons that you outlined here and touched on:

    They generally blame the user and rarely want to entertain the idea that the problem could be on their end.

    I speculated like you that the perpetually upcoming (cs) is simply a working(!) version of (gs). Time will tell if (mt) figures it all out and makes their hosting platform live up to the marketing hype.

    If you have any suggestions for new features for the plugin, please let me know. I plan on changing the Flash charts over to JavaScript, adding zoom in/out controls, better reporting, exporting, sidebar stats, etc.

  10. This is a really useful post, because I am going through the same issues.
    I actually host about 30 websites on one of my containers and the database crashes frequently. I am not a MySQL expert, but I see the issue as the number of open tables. It is hitting the maximum (1024 of 1024) most of the time, and this only became an issue when I started paying the extra $20 per month for the database container service.

    I’ve hacked most of the websites’ theme files to minimize database queries, but it didn’t really help.

    I want to track down which database is causing the issue, but Media Temple provides no tool for that.

    Any suggestions?
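One hedged suggestion for anyone in the commenter’s position, assuming the container allows a direct MySQL client connection (which a shared (gs) account may not): the server itself can report which databases are filling the table cache. A diagnostic sketch (`some_db` is a placeholder):

```sql
-- How full is the cache, and how fast is it churning?
SHOW GLOBAL STATUS LIKE 'Open_tables';      -- table handles currently cached
SHOW GLOBAL STATUS LIKE 'Opened_tables';    -- rapid growth means cache thrashing
SHOW GLOBAL VARIABLES LIKE 'table%cache%';  -- the 1024 ceiling (table_cache on 5.0)

-- Which databases own the cached handles right now?
SHOW OPEN TABLES;                -- every cached table, with its database
SHOW OPEN TABLES FROM some_db;   -- narrow the list to one database
```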

  11. Same thing is happening to me.

    I have three sites on there, all very low traffic. My grid container crashes all the time and I have to manually reboot it. Bit of a deal-breaker for me.
