Google Apps: Changing Mail Exchange (MX) records


Over the past few months we have received an ever-increasing number of support requests for changes to MX records. The majority of these requests come from customers who want to get their mail via Google Apps. You can always open a ticket requesting these changes; however, here are the instructions for those of you who like to get your hands dirty. (Don’t worry, if you break it we can fix it.)

 To modify your MX records with TotalChoice:

  1. Log in to your cPanel account.
  2. Under the Mail section, click the MX Entry icon. A list of your current MX records will appear.
      If you are using the X2 theme, click Mail, then click Modify Mail Exchanger (MX Entry).
  3. Click Change an MX Entry.
  4. In the Change MX for… field, enter ASPMX.L.GOOGLE.COM
     If you wish to add additional MX entries for your domain, set them with a lower priority (a higher number) than the primary MX entry. For example, to add the entry ALT1.ASPMX.L.GOOGLE.COM, just set its priority field to 1. Please be aware that 0 is the highest priority and 10 is the lowest.
  5. Click Change.

It’s that simple, and you should start receiving your mail via Google Apps within a few minutes.
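If you’re curious how those priority numbers behave once the records are live, here is a minimal Python sketch of standard MX semantics (this is illustrative only, not cPanel or TCH code): sending mail servers sort the records and try the lowest number first.

```python
# Minimal sketch of standard MX priority semantics (illustration only):
# sending mail servers try the record with the LOWEST number first.
mx_records = [
    (1, "ALT1.ASPMX.L.GOOGLE.COM"),  # backup entry from step 4, priority 1
    (0, "ASPMX.L.GOOGLE.COM"),       # primary entry, priority 0
]

def delivery_order(records):
    """Return hostnames in the order a sending server will try them."""
    return [host for _priority, host in sorted(records)]

print(delivery_order(mx_records))
# ['ASPMX.L.GOOGLE.COM', 'ALT1.ASPMX.L.GOOGLE.COM']
```

Once the change has propagated, you can check the live records yourself with `dig MX yourdomain.com` from any shell.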

Please note that the above instructions are specific to Google Apps, but the basic steps are the same for any MX record modification. You will still need to open a support request if you wish to use any of the Google Apps features that require CNAME changes.

Microsoft’s hostile takeover bid!


Microsoft has offered $50 billion for Yahoo! after failing to match Google’s success, and as a means to counter the Google-DoubleClick deal. I just feel that Microsoft is frustrated because it failed to become the numero uno of the Internet. Has Microsoft finally realised that it cannot make it big on its own? Or is it because Yahoo and Microsoft have fallen behind Google in the race to capture online advertising dollars?

I am sure the antitrust lobby will have a word or two to say about this deal, if it goes through.

What is your take on the whole episode?

Is there a conspiracy happening?


Have you heard of the undersea cables mysteriously being cut? Who or what is at fault?

The count as of today is up to five. That’s right: the latest cable cut affected 1.7 million Internet users in the UAE.

Both Internet and voice traffic have been affected.

A total of five cables, operated by two submarine cable operators, have been damaged, with one fault in each.

These are SeaMeWe-4 (South East Asia-Middle East-Western Europe-4) near Penang, Malaysia; FLAG Europe-Asia near Alexandria; FLAG near the Dubai coast; FALCON near Bandar Abbas in Iran; and SeaMeWe-4 again, also near Alexandria.

The first cut in an undersea Internet cable occurred on January 23, in Flag Telecom’s FALCON submarine cable, and was not reported at the time. It has not yet been repaired and the cause remains unknown, explained Jaishanker.

A major cut affecting the UAE occurred on January 30 in the SeaMeWe-4 (South East Asia-Middle East-Western Europe-4). “This was followed by another cut on February 1 which was on the same cable (FALCON). This affected the du network majorly as connections from the Gulf were severed while there was limited connectivity within the region,” said Khaled Tabbara, executive director, Carrier Relations, du.

He explained that the network had been re-routed through Al Khobar in Saudi Arabia and was now near normal.

Almost 90 per cent of Internet traffic is routed through undersea cables; only 10 per cent travels by satellite.

The experts also suggested that the cause of damage could have been a ship’s anchor that was dragging due to inclement weather conditions in the region during that particular period. “About 60-80 per cent of damages to undersea cable are due to external factors and only 10 per cent on an average can be classified as component failure,” said Tabbara.

Khaleej Times

A ship’s anchor? Really? Five cables? Don’t these ships know where the cables are? Curious minds want to know.

eBay to ban feedback?


The online auction website eBay has announced that it will ban negative feedback postings by sellers. This just sounds silly to me. How many times have you had a client on eBay who was simply a bad deal? I have had one or two myself. I think sellers should retain the right to leave feedback, just as buyers can.

The feedback system, in my view, is a very important part of the bidding system on eBay. I guess only time will tell.

Details can be found here.

Pesky Kiddies: random .js toolkit


At TCH we always try to keep our ear to the ground on industry trends of all kinds, from new service demands through to security.

Last month word came to us of a new and emerging trend resulting in the compromise of tens of thousands of web sites hosted in environments similar to that of TCH, no doubt very concerning to us at first glance. Naturally, our first course of action was to research this issue as thoroughly as possible and ferret out the facts of who, what and how this issue was propagating itself across so many web hosts (see the referenced articles at the bottom of this blog entry for more details).

After an evening of wall-to-wall, red-eyed trawling across industry news and security sites, IRC chat rooms, and forums of all kinds, the real danger of this new trend reared its ugly head: nobody knew exactly who was responsible for these incidents, and, most important and startling of all, nobody had any concrete facts on just how systems were being compromised. So, with only a few facts in hand and mindful of the speculation out there, we set out to combat this threat head on.

The first step was to take what we knew, or to put it more bluntly, what we didn’t know, and put it to use. This led us down a vigilant path: reviewing all servers for available updates to both the control panel and operating system, and ensuring those updates were consistently installed. Once that task had been completed, we were a bit more reassured about the state of our servers.

Next up was a review of the security practices in place on our servers. This is not something I can go into specifics about, but suffice it to say that the review was quickly put behind us, as we emphasize multiple layers of server security from the moment a server is configured and prepared for production use by our techs.

Finally came the most important aspect: actively defending ourselves against an attack we know little about. This is where our intrusion detection system (IDS) comes into the fold. We took some of the few facts we had, which included the naming scheme for the infected .js files, and created a pattern-based rule for our IDS. This allows our IDS, which sits on the border of our network between our core router and switching hardware, to scrutinize all traffic crossing the network for any signs of compromise from the “random .js” exploit toolkit. If any matching traffic is found, the IDS raises an alert on our management interface, which we can then promptly act on.
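We can’t publish the rule itself, but here is a minimal sketch of the technique in Python. The filename pattern below (an eight-character lowercase alphanumeric name ending in .js) is purely hypothetical; the real toolkit’s naming scheme, and our production IDS rule, are different.

```python
import re

# HYPOTHETICAL pattern for illustration only: flag a <script> include
# whose filename is eight random-looking lowercase alphanumerics + ".js".
# The real "random .js" toolkit's naming scheme (and our IDS rule) differ.
RANDOM_JS = re.compile(r'\bsrc=["\'][^"\']*?/[a-z0-9]{8}\.js["\']', re.IGNORECASE)

def flag_payload(payload: str) -> bool:
    """Return True if the payload contains a suspicious script include."""
    return RANDOM_JS.search(payload) is not None

print(flag_payload('<script src="/img/q3x9k2ma.js"></script>'))  # True
print(flag_payload('<script src="/js/jquery.js"></script>'))     # False
```

A production IDS such as Snort expresses this as a signature rather than Python, but the principle is the same: a content pattern matched against traffic as it passes the sensor.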

So far we have seen no indication, either from our IDS or from reported web site issues, of attempts against TCH servers, but by being mindful of the risks and taking proactive measures, we strive to stay one step ahead of the curve on this and other emerging threats.