Joomla, DL() & You

I am not going to beat around the bush on this: the last couple of days have been a little hectic here at TCH as we have been dealing with a series of web application vulnerabilities that attackers are actively taking advantage of. The purpose of this post is to explain a bit about what is going on, how these attacks affect you, and what we have done to prevent further abuse.

The first thing we need to understand is what is being attacked. As the post subject implies, it is primarily Joomla; the software has had a series of nine vulnerabilities disclosed since the 1st of September, and a number of more in-depth attacks have formed around them. The intended purpose of most of these attacks is to taint web sites with injected JavaScript. That code takes advantage of a number of client-side browser vulnerabilities which, if not patched or stopped by an antivirus, can cause further problems for web site visitors.

Now, at a glance you might be thinking that if someone fails to patch their web site software then it is their own problem, so how does this affect you? That is where the dl() function comes into play. The dl() function is essentially a dynamic loader for PHP modules and third-party extensions. To simplify this a bit: when enabled, dl() allows anyone to load extra features into PHP at runtime. Generally these are perfectly legitimate features, but if someone so desires, they can create a dynamically loadable module with malicious intent.

The scenario we are looking at is that attackers have gained entry to vulnerable web sites, primarily through Joomla, and then uploaded a series of malicious scripts, including a dynamically loadable module for PHP that, once enabled through dl(), has the ability to inject JavaScript code into pages. The injected code is usually placed just before the closing body tag and executes its payload on a visitor's first visit to a site; a cookie is then set that expires every 2 hours, after which the payload executes again on the next visit.
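To make the placement pattern concrete, here is a minimal detection sketch in Python. The heuristic (an obfuscation-style script block sitting immediately before the closing body tag) is an illustrative assumption on my part, not the actual attack signature we matched against:

```python
import re

# Hypothetical heuristic: flag pages where a <script> block containing
# common obfuscation calls sits immediately before </body>, matching the
# placement described above. Not the real signature, just the idea.
SUSPICIOUS = re.compile(
    r"<script[^>]*>[^<]*(?:eval|unescape|document\.write)[^<]*</script>\s*</body>",
    re.IGNORECASE,
)

def looks_injected(html: str) -> bool:
    """Return True if the page carries a suspicious script just before </body>."""
    return SUSPICIOUS.search(html) is not None

clean = "<html><body><p>hello</p></body></html>"
tainted = ("<html><body><p>hello</p>"
           "<script>eval(unescape('%70'))</script></body></html>")
print(looks_injected(clean), looks_injected(tainted))  # False True
```

A real scanner would of course run over every document in a site's docroot rather than a single string.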

This attack, though it had far-reaching implications, only affected 4 servers on our network (denver, dantooine, alderaan, chewbacca), and on those servers about half of the sites, in some cases fewer, were being tainted. As alarming as this situation is, we need to stress that no content was actually modified on any sites except the compromised Joomla sites themselves.

The way we have come to deal with this situation is a layered approach. First and foremost, we have increased our efforts to identify compromised sites on our servers and suspend or remove them. The next step was to cut off the enabling function of the attack, which is dl(). This function was actually something we used to disable on servers because of its malicious potential, but over time that procedure was phased out in the interest of allowing users to install custom dynamically loadable modules, such as ionCube, from their home directories. Now that ionCube is standard server-wide on all servers, however, little else commonly installed depends on dl(), and the PHP project has even gone as far as to declare dl() deprecated as of PHP 5.3.
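For those curious how dl() gets switched off, it can be done through php.ini; a minimal sketch, assuming a stock PHP configuration (the directives are standard PHP, the exact combination shown here is illustrative rather than our production config):

```ini
; Prevent scripts from loading arbitrary PHP extensions at runtime.
; enable_dl governs dl() under the Apache module SAPI; adding dl to
; disable_functions covers other SAPIs as well.
enable_dl = Off
disable_functions = dl
```

Extensions a site legitimately needs can still be loaded server-side via an `extension=` line in php.ini, which stays under administrator control.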

With dl() disabled on servers, the effects were immediate and all reports of tainted sites stopped. And when I say stopped, I do not say that lightly: we literally sat around all evening bashing the F5 key on our keyboards trying to get the JavaScript injections to reappear on sites. Between myself, Bill and Dick, we must have put in over 6 hours of combined keyboard kung fu in this effort. It was a great relief to see no more reports, and no issues ourselves first-hand, but that alone was not quite enough for us to be confident we had done enough.

We are continuing to be extra vigilant with compromise assessment on the servers to prevent any further malicious content from being injected into sites. In addition, we have started to use suPHP on some servers as the basis for new PHP security standards. Essentially, suPHP forces PHP code to run as the user who owns it instead of as the web server. It goes beyond that by enforcing strict permissions on content: nothing with a mode above 755 (such as world-writable data) is allowed to run, and executed content must be owned by the user. This might sound problematic, but since the code now executes as the user, there is no longer any need for data to be set to mode 777 (world-writable) or owned by the web server user, which reduces support issues and vastly increases security. We have only rolled the suPHP changes out to about 6 servers so far, but the support issues they have generated are minimal compared to the advantages they provide; in the future we will be looking to roll this change out to more servers on a slow but steady basis.
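The "nothing above mode 755" check is easy to reason about with a small sketch. This is not suPHP's code, just a Python illustration of the same permission rule:

```python
import os
import stat

def too_permissive(path: str, limit: int = 0o755) -> bool:
    """True if the file's mode grants any permission bit beyond `limit`
    (e.g. group- or world-write), the kind of thing suPHP-style checks reject."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return bool(mode & ~limit)

def find_loose_files(root: str):
    """Walk a directory tree and yield files whose permissions exceed 0755."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if too_permissive(path):
                yield path
```

Running something like `list(find_loose_files("/home/user/public_html"))` (the path is hypothetical) would list every file that a suPHP-style policy would refuse to execute.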

That is where we are at, if you have any questions or concerns regarding this blog or the topics discussed please feel free to comment or head to the TCH forums for further dialog.

Emergency Server Updates

For the last 3 days we have been tracking a local root vulnerability in the Linux kernel, the core element of all Linux operating systems. This vulnerability is unprecedented in scope, affecting Linux versions going back as far as 8 years, which prompted extra consideration in how we handle it.

Here at TCH we operate a network that is dominated by Linux, so to say we took this matter very seriously would be an understatement. After evaluating the threat this vulnerability poses to our network, dedicated servers, and shared/reseller clients, we decided that waiting any longer on an upstream update was not reasonable. Originally there was an estimate of Saturday 1900 GMT for upstream updates, but this fell through, prompting us to take action. Compounding the lack of a reliable upstream update is the fact that this vulnerability is being actively exploited in the wild, with publicly available attack code on many security and underground web sites.

At this moment, we are rolling out to all Linux servers on our network an updated kernel that closes this vulnerability while maintaining version compatibility with future upstream software updates. Retaining version compatibility allows our dedicated clients, as well as our own support team, to resume normal update practices with tools such as 'yum' or 'apt-get' without having to worry about conflicts with our in-house kernel update.

Please do not be alarmed if you experience a temporary outage on dedicated, shared or reseller servers. We thank everyone for understanding the urgency of this matter, and if you have any questions or comments please feel free to submit a help desk ticket.

UPDATE: Aug 18, 2009
We will be conducting reboots again this evening to push out a revised version of last night's kernel that corrects issues with the r1backup agent, local firewall services, and the network driver on certain servers. In addition, this new kernel revision is binary compatible with CentOS/RHEL 4 kernels, as it was built from the same kernel source tree as the standard kernels.

Backups, Backups, Backups!

Yup, you guessed it – we are going to talk about backups.

Here at TCH we take backups very seriously, and when I say that I do not say it lightly: there is no single more important aspect of our management regime than our backup infrastructure. I am going to explain a bit about the lengths we go to in order to protect the data you host with TCH.

The first layer of protection we use is RAID 1 mirroring on all our shared, reseller and operations servers (help desk, DNS servers, etc.). This allows a server to maintain an identical copy of the system so that, in the event of a disk failure, it can continue operating with no adverse effects. There is a catch here, though: software support for RAID cards in Linux, and even under Windows, is severely lacking when it comes to failure notification, meaning there is no industry-standard method for alerting someone when a disk fails. At TCH, we have developed an in-house software solution that works with our two preferred RAID hardware vendors (AMCC 3ware and Areca). When a disk in a RAID array fails, it captures information on the failure, then sends e-mail alerts to management BlackBerry pagers and to our help desk, ensuring that problems are identified and maintenance is immediately scheduled.
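The in-house tool itself is not something we publish, but the core idea is simple to sketch: poll the controller's status output and alert on anything that is not healthy. The sample output below is modeled loosely on 3ware-style CLI reports and is an assumption, not our actual monitoring code:

```python
import re

# Illustrative controller status output (3ware tw_cli-like layout; the
# exact format is an assumption for this sketch).
SAMPLE_STATUS = """\
Unit  UnitType  Status         %RCmpl  %V/I/M
u0    RAID-1    OK             -       -
u1    RAID-1    DEGRADED       -       -
"""

def degraded_units(status_text: str):
    """Return the names of RAID units whose status column is not 'OK'."""
    bad = []
    for line in status_text.splitlines():
        m = re.match(r"(u\d+)\s+\S+\s+(\S+)", line)
        if m and m.group(2) != "OK":
            bad.append(m.group(1))
    return bad

print(degraded_units(SAMPLE_STATUS))  # ['u1']
```

In a real monitor, a non-empty result would trigger the e-mail/pager alerts described above (e.g. via smtplib) rather than a print.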

RAID, however reliable it may be, is still not impervious to data loss, which brings us to our next level of protection. All our servers are set up with a spare hard disk and configured to take weekly backups of all user data and server configurations. Although these backups have proven to be extremely reliable, they are not always ideal, as they can be up to a week old at the moment they are needed. We treat these backups strictly as a first-line restore point: in the event of a failure they allow us to restore accounts in a way that is application-compatible with the cPanel interface, which ensures accounts function properly and consistently, while the data gaps are covered by our CDP solution described below.

Finally, we have our gigabit-network-enabled continuous data protection (CDP), which runs on absolutely every server that retains client or mission-critical data. This solution lives on network-attached storage (NAS) devices that we have built in house; they contain hardware RAID across 16 hard disks with redundant power supplies and between 6 and 13 TB of capacity. The CDP agent is low-level software that runs on servers with minimal load impact, as it does not read the file system but rather the disk itself, in a raw block-by-block fashion. This allows CDP to identify differences on the disk quickly and back up only those areas of the disk that have changed since the last run (incremental backups). These backups are captured on a 12-hour schedule, every day, 365 days a year, and saved to the NAS devices as snapshots. The snapshots make it possible to recover data as it was 12 hours ago, or 5 days ago, or anywhere in between; we save copies of the data in every state from every backup run, and we never overwrite backups. We can further leverage this solution because, once all the snapshots are layered together, it amounts to a backup of the server's entire hard disk, so in the event of a catastrophic failure we can take the CDP backup image and restore an entire disk in a single swift action. You can also use this solution yourself from inside cPanel: the R1Soft Backup feature lets you restore any data you require from the CDP backup images, just as we would, without having to request support. That said, we are always more than happy to help with any data recovery needs you may have, so do not hesitate to ask.
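The block-by-block incremental idea is worth a toy sketch. This is not R1Soft's implementation, just a minimal Python illustration of hashing fixed-size blocks and shipping only the ones that changed since the last snapshot:

```python
import hashlib

BLOCK = 4096  # bytes per block; real CDP tools track raw disk blocks/sectors

def block_hashes(data: bytes):
    """Hash each fixed-size block of a disk image."""
    return [
        hashlib.sha256(data[i:i + BLOCK]).hexdigest()
        for i in range(0, len(data), BLOCK)
    ]

def changed_blocks(old_hashes, new_data: bytes):
    """Return indices of blocks that differ from the previous snapshot --
    only these need to be sent to the backup store (an incremental run)."""
    new_hashes = block_hashes(new_data)
    return [
        i for i, h in enumerate(new_hashes)
        if i >= len(old_hashes) or old_hashes[i] != h
    ]

disk_v1 = bytes(BLOCK * 3)        # three zeroed blocks
disk_v2 = bytearray(disk_v1)
disk_v2[BLOCK + 1] = 0xFF         # dirty one byte inside block 1
print(changed_blocks(block_hashes(disk_v1), bytes(disk_v2)))  # [1]
```

Layering every run's changed blocks over the original image ("pancaking" the snapshots, as described above) reconstructs the full disk at any captured point in time.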

I hope you have enjoyed learning a bit more about how we protect the data you host with TCH, and understand that there is never any substitute for a well-planned and well-executed backup solution. Before I sign off, let me remind you to take the time to consider the data you store at home or at work and ask: do you have a backup solution? If not, consider storing some of your more important data on your TCH account, so that in the event of a failure you can rest assured knowing that TCH has you covered.

Pesky Kiddies: random .js toolkit

At TCH we always try to keep our ear to the ground regarding industry trends of all kinds, from new service demands through to security.

Last month, word came to us of a new and emerging trend resulting in the compromise of tens of thousands of web sites hosted in environments similar to TCH's, which was no doubt very concerning at first glance. Naturally, our first course of action was to research the issue as much as possible to ferret out the who, what, and how of its propagation across so many web hosts (see the referenced articles at the bottom of this blog entry for more details).

After an evening of wall-to-wall, red-eyed trawling across industry news and security sites, IRC chat rooms, and forums of all kinds, the real danger of this new trend reared its ugly head: nobody knew exactly who was responsible for these incidents and, most importantly and startlingly, nobody had any concrete facts on just how systems were being compromised. So, with only a few facts in hand and mindful of the speculation out there, we set out to combat this threat head on.

The first step was to put what we knew, or to put it more bluntly, what we didn't know, to use. This led us on a vigilant path of reviewing all servers for available updates to both the control panel and the operating system, and ensuring they were consistently installed. Once that task was complete, we felt a bit more reassured about the state of our servers.

Next up was a review of the security practices in place on our servers. Although this is not something I can go into specifics about, it is sufficient to say that this review was quickly put behind us, as we emphasize multiple layers of server security from the moment a server is configured and prepared for production use by our techs.

Finally came the most important aspect: actively defending ourselves against an attack we knew little about. This is where our intrusion detection system (IDS) comes into the fold. We took some of the few facts we had, which included the naming scheme for the infected .js files, and created a pattern-based rule for our IDS. This allows our IDS, which sits on the border of our network between our core router and switching hardware, to scrutinize all traffic coming and going across the network, looking for any signs of compromise by the "random .js" exploit toolkit. If any matching traffic is found, our IDS raises an alert on our management interface, which we can then promptly act on to combat the situation.
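To give a feel for what a filename-pattern rule looks like, here is a small Python sketch. The heuristic shown (a short run of random-looking lowercase letters ending in .js, excluding common legitimate names) is an illustrative assumption, not the actual signature we deployed:

```python
import re

# Hypothetical names a legitimate site commonly serves; anything else
# matching the short-random-lowercase pattern gets flagged.
COMMON_JS = {"jquery", "main", "menu", "script", "common"}
RANDOM_JS = re.compile(r"GET /([a-z]{6,10})\.js\b")

def suspicious_request(log_line: str) -> bool:
    """Flag requests for random-looking .js filenames in a traffic/log line."""
    m = RANDOM_JS.search(log_line)
    return bool(m) and m.group(1) not in COMMON_JS

print(suspicious_request("GET /qwhdtrex.js HTTP/1.1"))  # True
print(suspicious_request("GET /jquery.js HTTP/1.1"))    # False
```

A production IDS rule works on raw packets rather than log lines, of course, but the pattern-matching principle is the same.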

So far we have seen no indication, either from our IDS or from reported web site issues, of attempts against TCH servers. By staying mindful of the risks while taking proactive measures, we strive to stay one step ahead of the curve on this and other emerging threats.