Entrepreneur or Leader or Both?

February 21st, 2010

Bluehost/Hostmonster/Fastdomain started very small. We never took a dime of investment capital, never had any debt, and were happy to wait the 18 months it took before we got our first paycheck.

This may sound quite risky to many of you, but for me it wasn’t risky at all. I could see the plan for success completely in my mind. I defined success differently back then, and I honestly never planned for this business to grow this large, but I had a clear picture of exactly the steps required to succeed. In my mind it was just a matter of doing it; I never thought there was a chance it wouldn’t succeed. This is how an entrepreneur thinks. They solve problems, take risks, work hard, and have an insatiable desire to succeed.

Now that we have grown into a much larger company we have somewhat outgrown the stage where only an entrepreneur is needed. Now we need an entrepreneur and a great leader.

This got me thinking about what the difference between the two actually is, and I wanted to share it with you. Here is the Matt Heaton definition of each.

Successful Entrepreneur – A person who has the ability to recognize a need or deficiency, to differentiate between a genuine need and an idea that can become a successful business, to design a solution, to use his or her drive and ambition to implement that solution, and then to profit from it to whatever level the entrepreneur desires.

Most successful entrepreneurs follow this path reasonably closely, in my opinion. The unsuccessful ones are nearly identical to the very successful ones except for two missing attributes. First, if they lack the knowledge to implement their own ideas themselves, they often fail. When you rely on someone else or on outside help, the ideas tend to change, and the vision that was so clear at the beginning of the plan begins to fall apart. The second area is intelligence and education. If you have all the ambition in the world but don’t understand finances, your product, or the marketplace, you will almost certainly fail. I am not talking about a degree or any specific piece of paper; I simply mean that you have to be willing to put in the time to really understand the specifics of the problem you are trying to solve. If you do that, you will succeed.

Successful Leader – A person who has the ability to recognize a need or deficiency, to separate the most important goals from those that can and should wait, to design a solution that can be implemented with the resources he or she has available, to obtain currently unavailable resources needed to achieve the outlined goal, to use his or her drive and ambition to reach the goal through the resources and people around them, and then to show others how the goal solved the predetermined problem and clearly state what the next goal is and why.

In essence, for me the main difference between a great entrepreneur and a great leader is how you achieve success. An entrepreneur literally wills his or her idea to come to life and succeed; it all comes from drive and ambition within themselves. A great leader does the same thing through the people around them. It’s easy to make myself be great (always humble, I know :) ); it’s MUCH harder to make those around you great as well.

To be a successful entrepreneur is, from my point of view, a piece of cake. It’s in my DNA; it’s who I am. To be a successful leader is much harder for me. I rely very much on my own abilities to solve the problems at hand. I am often unwilling to listen to others’ ideas or to give them the freedom to implement those ideas because they don’t fit within my vision for the business. Sometimes that can be a good thing if I feel the person would make a big mistake, but I have tried very hard to surround myself with intelligent, competent people. If I can’t trust them to do their jobs, then when they fail at those jobs it’s no one’s fault but my own.

I’m still deciding if I’m the right person to lead our company in the future. I tend to lead more with a whip in hand than with a kind word and encouragement. It’s time for me to decide whether I’m willing to bend with the reality of having a large company or break in half from the lack of flexibility required to lead one. Whatever path I choose, I’ll make sure it’s the best thing for the company, for our customers, and for me.

Matt Heaton / President Bluehost.com / Hostmonster.com / Fastdomain.com

Increase Website Speed & Cut Bandwidth Costs for FREE!

February 6th, 2010

Several months back I took my wife and five children on a 7 day Disney cruise (I *HIGHLY* recommend it by the way, and I’m a hard person to please :) ). Whenever I go on vacation the first thing I take care of is making sure that I have internet access. Thankfully, I was able to use my Verizon MiFi card while in most ports, but while at sea I had to use Disney’s on board satellite internet. It was extremely slow.

This got me thinking about how I could best increase internet speed for our clients with slow connections, at no cost to them. I decided on mod_deflate. I had used mod_gzip in the past (almost 10 years ago), so I was familiar with how it all worked, and it was simple to set up. Mod_deflate takes certain types of files, compresses them at the server level, and then sends those smaller files to you. Images, zip files, etc. don’t compress well (so we don’t compress those), but HTML, javascript, and css files compress very well; often we see 80% compression on those types of files. The files are then decompressed automatically on the client side. This is all transparent to the user, except that download/page load times are much faster (10-25% faster).
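
To see why text compresses so well while images and archives don’t, here is a small Python sketch using zlib, the same DEFLATE algorithm behind mod_deflate (the sample data is made up for illustration):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Fraction of bytes saved by DEFLATE compression."""
    return 1 - len(zlib.compress(data)) / len(data)

# Repetitive markup, like typical HTML/CSS/JS, compresses very well.
html = b"<div class='item'><span>Hello, world</span></div>\n" * 500

# High-entropy data, like images or zip files, barely compresses at all.
blob = os.urandom(len(html))

print(f"HTML bytes saved:   {compression_ratio(html):.0%}")
print(f"Random bytes saved: {compression_ratio(blob):.0%}")
```

Run it and the markup saves well over 80% while the random blob saves essentially nothing, which is exactly why we skip compressing images and archives on the servers.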

However, there is a severe problem with mod_deflate that no one seems to have solved: it requires *significant* CPU on the server. The problem is that CPU resources are often maxed out already. If you use mod_deflate while the CPU(s) are maxed out, the servers become even slower and every website on the server appears very sluggish. For this reason most web hosting companies don’t use mod_deflate, and for good reason.

However, at Bluehost/Hostmonster we have a great solution for this problem! Some of you may have read a previous blog post where I mention that Bluehost/Hostmonster have a proprietary CPU protection system. Using this system, we track CPU usage in realtime. We then wrote a patch for the Apache web server (the software that serves your websites to your browser) that interfaces with our CPU protection system. The patch checks CPU usage twice a second; if usage exceeds a certain threshold, we temporarily suspend mod_deflate, and when there are unused CPU cycles we re-enable it. By implementing it this way we get all the benefits of mod_deflate with none of the detriments of excessive CPU usage causing slowdowns.
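
Our actual patch is C code inside Apache, but the core idea fits in a few lines. Here is a Python sketch of it; the class name, the 90% threshold, and the polling interval are all illustrative, not our production values:

```python
CPU_CHECK_INTERVAL = 0.5   # the patch polls twice a second
CPU_THRESHOLD = 0.90       # hypothetical cutoff; the real threshold is tunable

class DeflateGovernor:
    """Suspend or resume compression based on measured CPU usage."""

    def __init__(self, threshold=CPU_THRESHOLD):
        self.threshold = threshold
        self.deflate_enabled = True

    def update(self, cpu_usage):
        """Called every CPU_CHECK_INTERVAL seconds with usage as a 0-1 fraction."""
        if cpu_usage >= self.threshold:
            self.deflate_enabled = False   # CPU saturated: suspend mod_deflate
        else:
            self.deflate_enabled = True    # idle cycles available: re-enable it
        return self.deflate_enabled

governor = DeflateGovernor()
print(governor.update(0.95))   # heavy load -> False (compression off)
print(governor.update(0.40))   # load drops -> True  (compression back on)
```

The point of the design is that compression is purely opportunistic: it only ever spends CPU cycles that would otherwise sit idle.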

The first full day we ran this, it lowered our bandwidth consumption by about 600 Mbits per second (with very conservative settings). When we run it with aggressive compression we save over 1 Gbit/s of sustained bandwidth. That is considerable savings/speedup for something that took about 4 days to develop, test, and deploy!

Now, the next time our family goes on a cruise, Bluehost/Hostmonster sites will appear much faster!

Matt Heaton / Hosting by Bluehost.com

Bad Apple or Great Kid?

January 31st, 2010

When I was young I was extremely hyperactive. It got so bad at one point that in the 3rd grade I was allowed to just “leave” class whenever I wanted to have my own personal recess. The school did this because my poor teacher was so distraught with my behavior that she literally couldn’t handle me and so I was allowed to roam the playground until my “energy ran out” – which of course never happened.

Looking back, I feel really bad for what I put all my teachers through. I really was a wild kid :)

I remember working through all the first grade and second grade math books by the end of September of first grade. They wouldn’t let me do the 3rd grade math books because they didn’t want me to get ahead (I always thought that was ridiculous, by the way). After that I started getting “S”s on most of my report cards (S = satisfactory). My Mom wanted “O”s for ‘outstanding’. Later, I started getting “N”s on my report cards (N = needs improvement). At this point my Mom started getting worried. She thought that because I was misbehaving so much I wasn’t learning the material, but that wasn’t the case.

The problem wasn’t that I didn’t know the material; the problem was that once I learned something (or thought I did), I HAD to move on to something else. When I say that I “HAD” to move on, it’s the truth. I literally couldn’t bring myself to do “busy work” for a concept I already understood just to satisfy the teacher. Often homework didn’t get done because I KNEW that I understood the concept. It was a complete and utter waste of time in my mind, and I had new, exciting things that I was busy working on. I always craved doing something new.

High school was the same. I remember getting a D+ in chemistry one semester (my worst grade in high school), but when it came time to take the ACT for college entrance I scored a 35 (a near perfect score) on the science portion, which happened to be chemistry that year. Things just moved a little too slowly in school for me, and I am grateful for it now because it gave me a lot of free time to learn about computer hardware and software development.

One of the things I love so much about Bluehost and Hostmonster is that I get to pick and choose new things that interest me, that are challenging, and that will benefit our customer base. In other words, I have an environment where I can succeed.

I could just as easily have been written off as one of those goof-off kids with poor grades, instead of being presented with serious challenges and given the freedom to experiment, learn, and do things that others hadn’t yet tried. I’m so happy that I was given a chance to show what I could do later in life.

Everyone in this world has something to offer. The sooner you find out what that is the sooner you will find happiness. Don’t let other people tell you what will make you happy. Instead, look from within and see what it is that drives you, and what you need and then go in that direction.

Your happiness doesn’t require the understanding and comprehension of those around you, it only requires understanding by yourself. Find out what that is and then happiness will be yours.

Matt Heaton / Bluehost.com

Bluehost’s “Secret Numbers”

January 27th, 2010

January 2010 has seen some good growth for our hosting platform. I am usually pretty secretive about our company “numbers”, but have decided to spill the beans tonight on my blog. Below are some interesting stats from our various hosting brands.

Total Domains Hosted : 1.9+ million domains
Total Paying Hosting Customers: More than 525,000
Total Servers: 850+ (ALWAYS rotating out older servers)
Total Sales/Billing/Support Requests Per Day: Approximately 5,000
Number of new customers (not domains) added each day (Mon-Fri): 800+
Number of new customers (not domains) added each day (Sat, Sun): 500+
Number of new domains added each month: 50,000 – 70,000
Total Bandwidth Capacity: 20 Gigabits/Second (100% ours, not shared in ANY way)
Average Hold Time For Support: 19 seconds
Number of Employees: 240+
Registrar For Domains: Fastdomain Inc (Sister company that “sells” domains to Bluehost/Hostmonster)
Outsourced services: NONE!!!!!!!
Revenue: _____ (Some things really do need to be kept private)
Profit: _____ (Some things really do need to be kept private)

Bluehost/Hostmonster/Fastdomain have been wildly successful. I’m so grateful to have been part of this incredible venture. There was, and is, an ENORMOUS amount of effort put into making our products the best that we know how to make them. Add to that a lot of luck and you get Bluehost and Hostmonster.

Thank you so much to all our loyal customers that tell all your friends to sign up! The vast majority of all our sales come from non affiliate related word of mouth recommendations. That doesn’t happen unless our customers think we are doing a pretty good job. We promise to try our hardest to improve the things that are “good” that should be “great”, and to add the features that you need that no other company will bother to add. That is our promise to you!

Thanks again.

Matt Heaton / Bluehost.com

Linux CPU Scheduler (The biggest problem you never knew you had!)

January 16th, 2010

This is perhaps the least sexy topic I’ve ever written about :) The linux CPU scheduler is an extremely important part of how linux works. The CFS scheduler (Completely Fair Scheduler) has been part of linux for a couple of years. The purpose of the scheduler is to look at tasks (processes and threads), assign them a processor or CPU core to run on, and make sure that all the processes that need run time get an equal and fair share of processing time. It is also responsible for context switching (migrating tasks from one cpu/core to another, or switching out processes that don’t need any more run time). This helps balance processes and make better use of CPU cache by being “smart” about where to put queued and running processes.
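
At its heart, CFS always picks the runnable task that has accumulated the least virtual runtime. Here is a toy Python model of that one idea (the real scheduler keeps tasks in a red-black tree and weights vruntime by priority; this sketch ignores both, and the task names are made up):

```python
import heapq

class MiniCFS:
    """Toy model of CFS: always run the task with the least virtual runtime."""

    def __init__(self):
        # min-heap keyed on (vruntime, name); real CFS uses a red-black tree
        self._queue = []

    def enqueue(self, name, vruntime=0.0):
        heapq.heappush(self._queue, (vruntime, name))

    def run_one(self, slice_ms):
        """Pick the most-starved task, charge it one time slice, requeue it."""
        vruntime, name = heapq.heappop(self._queue)
        heapq.heappush(self._queue, (vruntime + slice_ms, name))
        return name

cfs = MiniCFS()
for task in ("apache", "mysql", "php"):
    cfs.enqueue(task)

# Every task gets an equal, rotating share of the CPU.
order = [cfs.run_one(10.0) for _ in range(6)]
print(order)  # ['apache', 'mysql', 'php', 'apache', 'mysql', 'php']
```

The fairness is elegant, but notice that every single decision involves queue bookkeeping; that bookkeeping is exactly the overhead I complain about below.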

It all sounds simple enough, but there are HUGE problems with the design of CFS in my opinion. I’m getting in dangerous territory here because I’m about to tear apart something that was designed by people that are much smarter than myself. However, I have something that most kernel developers don’t have access to – a huge and unbelievably busy network. Our network receives more than a trillion (Yes with a T) hits every quarter. We receive more than 100 million email every day. We send out more than 25 million email each day. We now have more than 5 petabytes of storage. In short, I have one of the best testbeds on the planet for finding deficiencies in an operating system.

Enough background; let’s get to why I think CFS is “broken”. As the number of processes increases, CFS gets disproportionately slower and slower until almost no work (CPU processing) gets done. There are many tunables to modify how CFS behaves, but the premise is the same: CFS is built on the (in my opinion) incorrect premise that all processes are always “equal”. I can easily create enough processes on a production server that CFS will consume almost all the CPU cycles just trying to schedule the processes to run, without giving the processes almost any time to actually run.

Think of it like this: let’s assume it takes 0.1% of the CPU to “schedule” each process to run, and then X% of the CPU to actually run the program. What if you have 900 processes running and each one takes 0.1% of the CPU for scheduling? Now you only have 10% of the CPU remaining in which to run your software. In reality I think it’s much worse than this example. After about 1500 concurrent processes, CFS completely starts to fall apart on our servers.
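
That back-of-the-envelope math looks like this (the 0.1%-per-process figure is the illustrative number from the example above, not a measured kernel constant):

```python
def usable_cpu_fraction(n_processes, sched_cost_per_proc=0.001):
    """CPU left for real work after paying a fixed scheduling cost per process."""
    overhead = n_processes * sched_cost_per_proc
    return max(0.0, 1.0 - overhead)

# 900 processes at 0.1% scheduling cost each leaves only 10% for real work.
print(f"{usable_cpu_fraction(900):.0%}")    # 10%
print(f"{usable_cpu_fraction(1500):.0%}")   # 0%: scheduling has eaten everything
```

The real cost is not a fixed constant per process, which is the whole problem: it grows with load, so the curve bends down even faster than this toy model suggests.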

The worst part is that the only way you can really tell this is happening is to measure the process quantum (the time slice that userspace programs get on a cpu/core). How many of you know how to measure the average process quantum of the scheduler? That’s what I thought :) If you add up all the “quantum times” during a 1 second period and look at the difference, you will see how much CPU the kernel is taking to service those requests. On a desktop system I get about 95% of a CPU for running my software. On our busiest servers I get about 70% of our available CPU time for actually running our software; the rest is eaten up by the inefficient scheduler. If you feel compelled to evaluate process quantum times you can enable sched_debug in the kernel and check out its output. It’s actually pretty good data for those nerdy enough to read it.
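
The accounting itself is simple: sum the quanta handed to userspace over a one second window, and whatever is missing went to the scheduler. Here is a sketch with made-up sample numbers chosen to mirror the 95% desktop and 70% server figures above:

```python
def scheduler_efficiency(quanta_ms, window_ms=1000.0):
    """Fraction of a window actually handed to userspace.
    quanta_ms lists every time slice observed during the window; whatever
    is left over was spent inside the kernel on scheduling work."""
    return sum(quanta_ms) / window_ms

# Hypothetical samples: a lightly loaded desktop vs a saturated server.
desktop = [4.75] * 200        # 200 generous slices: 950 ms of real work
busy_server = [0.35] * 2000   # 2000 tiny slices: only 700 ms of real work

print(f"desktop: {scheduler_efficiency(desktop):.0%}")
print(f"server:  {scheduler_efficiency(busy_server):.0%}")
```

Notice the server hands out ten times as many slices yet delivers less total work; the per-slice overhead is exactly what disappears from the window.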

It’s been nearly impossible to prove my calculations over the last several months, but after many long nights I now feel very comfortable saying that CFS truly is a broken design. It may be a good design for a desktop, and admittedly the kernel guys have made low-latency desktops a priority, but still… You do have to have some upper bound on how many processes can be running and how many new processes can be started over a given period of time, but this limit should be MUCH higher than 1500-2000. I would say it needs to be somewhere in the 10,000 range to really be effective with hardware that will be coming out in the next 6-18 months. If linux wants to scale efficiently to 16, 32, or 64 cores, then the scheduler needs some serious work.

How do we fix it? Well, we actually have a “process start throttler” kernel patch that evens out the start times of processes and gives predictable behavior to the scheduler, though it doesn’t solve the issue of the scheduler simply not scaling. It gives us a pretty substantial gain in speed, and more importantly it stops a single user who launches a ton of processes at once from impacting the speed and stability of everyone else on the system. This is pretty complex to explain, and it’s actually being tested on live servers starting today, but that is a blog entry for another day.
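
I won’t go into the kernel details here, but the general shape of a start throttler is a token bucket: a burst of new processes is allowed up to a limit, and anything beyond that gets deferred and smoothed out over time. A rough userspace sketch of the idea (the rate and burst numbers are made up; the real patch enforces this inside the kernel):

```python
import time

class StartThrottler:
    """Token bucket limiting how fast new processes may be launched.
    The rate and burst here are illustrative numbers only."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_start(self):
        """Return True if a process may start now; False means defer it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

throttler = StartThrottler(rate_per_sec=50, burst=10)
# One user tries to launch 100 processes at once: the burst is allowed
# immediately, everything past it has to wait for tokens to refill.
allowed = sum(throttler.try_start() for _ in range(100))
print(f"started immediately: {allowed}")
```

The scheduler never sees the thundering herd; it sees a steady trickle it can plan around, which is where the predictability comes from.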

Matt Heaton

Interesting iPhone Observations

December 5th, 2009

As many of you know I spend a good amount of my life hacking away at the linux kernel and our hosting environment trying to make things smoother. Our new cpu controller, memory controller, and process controller (Officially coming out in about 10 days – YEAH!) make a HUGE positive impact on the stability of our systems.

One of the things you probably didn’t know was that our cpu controller had a big impact on the amount of power that we use. I never even considered that our CPU controller would lower our server power usage, but it did. In fact it lowered our total power usage by about 7% (Our WHOLE datacenter!). Not too bad for a software product that was never intended to lower power costs at all. Now that I see the benefits we are actively trying to make it reduce even more power. I think it is reasonable to expect that I can get it to about 9% savings.

Now what does this have to do with Apple’s iPhone? Everything, in my opinion. Apple’s iPhone has two huge problems that drive me crazy. First, I don’t think the battery life is very good, and second, there is VERY little CPU and memory protection for apps on the iPhone (ESPECIALLY IF YOU USE A JAILBROKEN IPHONE!!). I use my phone a LOT as a phone, a TV (Slingbox), and for all kinds of other internet goodness.

So, like any self-respecting geek, I wondered what would happen if I applied the principles of our CPU and memory controller to the iPhone environment. So I started testing the iPhone (that means trying to break it and make it die a painful software death) and I found something interesting. Just like our servers, single apps on the iPhone burst to huge amounts of CPU usage (near or at 100%) and then fall almost immediately back to 1-3% usage.

This didn’t surprise me at all, as the iPhone is built on the foundations of BSD Unix, and this is exactly how a stock BSD, Windows, OSX, or Linux installation behaves. Now, this pattern of 3% CPU usage spiking to 100% and back to 3%, over and over, is BAD for battery life. You know what else it is bad for? You guessed it (or maybe not :) ). It’s REALLY bad for stability on the iPhone.

What does this mean for you? Well, if you have a jailbroken iPhone it means there is a really good chance that many of the apps you install compete directly for resources with the phone app and phone capabilities. Uh oh: dropped calls. Guess who gets blamed when that happens? AT&T. It’s pretty hard for the average consumer to determine whether AT&T is to blame or spiking software on their iPhone. Don’t get me wrong: there is PLENTY of blame for AT&T. They are one of my least favorite companies on the planet (right behind Delta and Comcast). There are also significant challenges to segregating memory and CPU resources on non-jailbroken phones, so don’t think you non-jailbreakers are out of the woods either.

It’s amazing to me that an outfit I respect as much as Apple hasn’t gotten around to solving this problem. In their defense, it IS a very, very difficult problem to solve. In fact, the primary reason for writing that whole notification system, instead of simply allowing background apps to run, was to save battery life and segregate resources.

All of this is tempting me greatly to port our CPU controller and memory controller over to the iPhone. It would make it almost impossible for rogue applications to A) eat up your battery with inordinate amounts of CPU, B) crash your iPhone by eating too much memory or CPU, or C) cause your phone to drop calls because of other 3rd party apps.

What do you think? Would you pay $3 for this software for your iPhone (gotta keep the lights on :) ), or am I making something out of nothing and you don’t see any real problem with your iPhone?

Matt Heaton / President Bluehost.com

Life Is What You Make Of It!

November 11th, 2009

I have had more than my fair share of interesting experiences in my life – mostly self-inflicted :) . One thing I have learned is that life is what YOU make of it. You can’t control all the events in your life, but you can control your willingness to participate.

Since I have written almost exclusively about technical aspects of hosting for the last several entries I thought it would be fun to share a few strange personal experiences that have happened to me over the years. I won’t go into a ton of detail, but everything listed below is 100% true.

* I was hit by a bus while riding a bike in Taiwan – TWICE!

* I have sat on the shoulders of a 7 foot tall African Chief in Kenya.

* I have been held at gunpoint by South Korean military police.

* I have had the FBI and local police raid my house and confiscate all my computer equipment :) (Hey, I was still a minor!)

* I was bitten by a monkey while living in Saudi Arabia (Jeddah).

* I was beaten by a 10 year old girl at the table tennis US Open (Yes, there really is a US Open for Ping Pong!)

* I’ve spent 8 hours in the Disneyland police station :)


My point is that life has all kinds of interesting experiences in store for us, but we have to be willing to stretch ourselves. Next time you ask yourself “Why would I want to do that?” instead ask yourself “Why not?” Life is short, make the most of it!

Matt Heaton / President Bluehost.com

The argument AGAINST virtualization

October 25th, 2009

It seems a day doesn’t go by that I don’t see another article written on the virtues of virtualization. For those that don’t know, virtualization is technology that allows you to run multiple instances of an operating system on a single server, or on top of a system of clustered servers. Virtualization has been around forever; it is the method many mainframes used to deploy software. But virtualization became popular for desktops/workstations in 1998/99 when VMware was first released.

While virtualization techniques have improved dramatically in the last 10 years (think 3D support, para-virtualization for direct access to the hardware layer, etc.), there is a fundamental problem with the whole concept that no one ever talks about: the HUGE overhead that comes with running multiple instances of an operating system at the same time for software that doesn’t NEED to be run on different machines. This is best illustrated by an example.

Let’s assume there are 100 units of CPU processing power available on each of 2 identically configured servers (from a hardware perspective), and that 10% of system resources are dedicated to servicing each running operating system. **10% is a very, very low number in my opinion, but I will use it to be on the safe side of this argument.** Let’s also assume that a given user/customer consumes 2% (2 units) of system resources.

Server A (no virtualization) – 100 units of CPU
10 units used for the single OS (Windows, Linux, OSX, etc.)
90 units left for users/customers

Server A can accommodate 45 users.

Server B (one virtual machine per customer) – 100 units of CPU
Each customer needs 10 units for their own OS instance plus 2 units for their actual workload.

At 12 units per customer, Server B can accommodate a little more than 8 users.
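
The arithmetic above can be checked with a short sketch (the function name and defaults are just this example’s numbers):

```python
def users_per_server(total_units=100, os_cost=10, user_cost=2, virtualized=False):
    """Users a server can hold. Bare metal pays the OS cost once;
    one-VM-per-user virtualization pays it again for every single user."""
    if virtualized:
        return total_units // (os_cost + user_cost)
    return (total_units - os_cost) // user_cost

print(users_per_server(virtualized=False))  # 45
print(users_per_server(virtualized=True))   # 8
```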

45 users vs 8 users… Hmm… Now, I have taken several liberties with my assumptions. To be fair, there are numerous techniques used to speed up virtual systems that I have not explained, but I think you get my point. That said, there are important reasons to USE virtualization. Listed below are several cases where it may be the best or only option you have.

* If you need to run disparate OS instances on the same hardware, e.g. OSX and Windows on the same machine at the same time.
* Testing purposes – If you want to set up an alpha, beta, live setup on the same server.
* Security reasons – Memory & CPU are segregated pretty well in most virtualized environments (disk I/O not so much; in fact I think it’s terrible on vmware, parallels, xen, etc.). A lot of progress has been made in this area, but it’s not even close on a fully loaded machine, in my opinion and based on extensive I/O testing.
* Need to migrate virtual machines on the fly – This is a great feature that many vm products support.
* If you OFTEN need to dynamically change resources for different OSes, then a virtualization product may be good for you, as you can change cpu/memory/disk resources easily and in many cases make these changes on the fly.

Here are some of the reasons against using a virtualized product.

* The overhead of multiple OS installs to pay for before you even run a single program.
* You have to do security updates/maintenance for every OS instance you have installed. Just thinking about 10 instances of Windows Anything running on a server is enough to make any botnet operator salivate.

Virtualization has its place. It’s a super important piece of technology, but it is being applied in many areas where efficiency is sacrificed for convenience. I revile the idea of convenience over efficiency as a long term strategy, yet many companies are doing just that. If you are a company deploying huge numbers of virtual machines just to control resources (CPU/memory/disk), then you are throwing money away. In an industry where every penny counts, why give your competition any advantage?

Shared Hosting CPU Protection Is Here!!!

July 26th, 2009

I have been promising CPU protection for a long time, and it’s finally ready. It has been running on several servers during our live beta testing and has proven extremely successful. For those that need the brief rundown again, this is what the feature provides.

1) Guaranteed CPU resources for every user on every server.
2) Protection from heavy users. No longer can a single user, or a small group of users, consume an inordinate amount of resources and cause your own site to fail to load or load slowly (NO OTHER SHARED HOST ON THE PLANET CAN SAY THIS WITH ANY VALIDITY – WE ARE THE ONLY HOST WITH THIS TECHNOLOGY AT THIS TIME!)
3) Extremely sensitive CPU resource allocation – CPU time is calculated in 200 millisecond increments.  This means our servers will always respond quickly and users won’t be exposed to slowness due to sudden bursts of CPU usage.
4) CPU Statistics – We can now tell you exactly how much CPU you have been using each 24 hour period.  More importantly, we can tell you how often your domain was throttled or capped if your site experiences “bursty” CPU usage.  No more guessing on what you are using, now we will tell you exactly.
5) Users can see IN REAL TIME if their account is being throttled for any reason.
6) Users can see IN REAL TIME exactly what processes they are running that put them over the CPU limit.
7) NO MORE CPU QUOTA EXCEEDED ERRORS EVER!!!! (Starting on Tues July 29th 2009)  We will be completely removing the code that bans users for CPU overages!!
8) Processes will no longer be killed or stopped because of using too many CPU resources. Instead, your site will simply bump up against the CPU limits we put in place. This works just like a VPS or dedicated server, but without the high cost!
9) Now able to sell “dedicated” CPU resources (it’s not actually in our shopping cart yet, but the technology is there, so give us a couple of weeks to build out the site for it). Now you can purchase an entire CPU core and get speeds FAR FASTER than a dedicated server for 30-40% less.
10) Ability to purchase instant CPU upgrades.  If you decide you need double the CPU that you currently use we will be able to do that for you without you having to deal with the maintenance and headache of a VPS or dedicated server.  FINALLY!
11) ALWAYS have some idle CPU resources available to service incoming requests.  We will never allow the general pool of CPU usage to become saturated so that no resources are available to service requests.  Again, no other shared hosting service in the world that I know of has this technology.
12) FREE – FREE – FREE – There is no cost at all for this feature. The only cost would be for users that want higher dedicated CPU resources. We will most likely offer 3 different choices in that regard, selling CPU in increments from 50% of a single core up to as many as 4 dedicated cores in a shared hosting environment. This has all the benefits and cost savings of a shared server system with the performance of high end dedicated servers.
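
To make the 200 millisecond accounting from item 3 concrete, here is a toy model of throttling instead of killing; the class, the 50 ms cap, and the numbers are all illustrative, since the real implementation lives in our kernel patch:

```python
class CpuLimiter:
    """Per-user CPU cap accounted in 200 ms slices. A user who hits the cap
    is throttled for the rest of the slice instead of having processes
    killed. Limits here are illustrative, not production values."""

    SLICE_MS = 200

    def __init__(self, limit_ms_per_slice):
        self.limit = limit_ms_per_slice
        self.used = 0.0
        self.throttled_slices = 0

    def charge(self, cpu_ms):
        """Record CPU used inside the current slice; True means still runnable."""
        self.used += cpu_ms
        return self.used < self.limit

    def new_slice(self):
        """Every 200 ms: remember whether the cap was hit, then reset."""
        if self.used >= self.limit:
            self.throttled_slices += 1
        self.used = 0.0

user = CpuLimiter(limit_ms_per_slice=50)  # roughly a quarter of one core
print(user.charge(30))   # True: under the cap, keeps running
print(user.charge(30))   # False: over 50 ms, throttled (not killed)
user.new_slice()
print(user.throttled_slices)  # 1 slice in which this user was capped
```

Because every slice resets the count, a bursty site only loses speed for a fraction of a second at a time, which is why the fine-grained 200 ms window matters so much.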

We are willing to license this solution to a small group of other cPanel hosts (at least for the first 60 days) if you are interested. The general cost would be $125 per 8-core server for 12 months, or $75 for 12 months on a 4-core server. The solution does work on dual core systems but is designed to be most efficient with more cores. This cost includes our disk I/O throttling solution as well (discussed at length in previous blog entries of mine). You can FULLY EXPECT to double your user density with MUCH better speeds for your customers with these two solutions in place. This is not marketing hype or an extreme case; it really works that well.

Requirements for hosts that want to use this product –

1) Must use linux with a 2.6.28 kernel or newer (Sorry, backporting beyond 2.6.28 is a nightmare!)
2) Must be willing to apply a small kernel patch (Wish there was a way around this, but we do have to modify the kernel to make the magic happen!)  We will assist with applying the patch if there are any problems.
3) Must be willing to run two binary files that we will provide – cpud (our CPU controller) and iothrottled (our disk I/O bandwidth/IOPS manager). We will make the source available for review once we have the legal issues on our end taken care of, but for now it is two binaries.
4) The CPU controller (Once the kernel portion is done) takes about 5 minutes to set up, literally!! And iothrottled takes about 10 minutes to setup and configure.
5) Must trust that Bluehost/Hostmonster would actually sell a product to everyone else to compete with ourselves :)

If you are interested in licensing it or testing it out (Must be at least 10 servers or more if you want to test it out before buying at this time) then please email me directly with your contact information at matt@bluehost.com.

Matt Heaton / President Bluehost.com

Palm vs Apple (iTunes Analysis That No One Else Is Talking About)

July 24th, 2009

As the smartphone market increases and Apple gobbles up market share from its competitors what can smaller companies do to compete?

First, copy the good aspects of the product you are trying to compete against. Next, improve your product in the areas where your competitors are deficient. Finally, if you are smaller and more nimble, lower your price. You lower the price because you are too small for the big boys to adjust their pricing in response to you.

This is exactly what Palm is trying to do with the Palm Pre smartphone. It’s a very good first try, and I think the phone has great potential, but right now it’s not up to snuff compared to the iPhone. However, Palm is doing something that I think Apple is very scared about.

The Palm Pre is (sometimes) compatible with iTunes. Apple doesn’t open iTunes to outside devices, so syncing doesn’t work on anything except Apple iPods/iPod Touches/iPhones, etc. The Palm Pre masquerades as an older iPod in order to sync with iTunes.

When the Palm Pre launched it worked great with iTunes, then about a week ago Apple released a “fix” for iTunes that broke compatibility with the Pre.  Palm has since released version 1.10 of webOS that has a fix for the fix that broke compatibility.  I suspect Apple and Palm will play this cat and mouse game for a while, but here is the part that almost no one is talking about –

Apple is scared to death to sue Palm, and here’s why: if Apple sues Palm and loses, they don’t just lose to Palm, they lose to everyone. Right now the issue is still on legal quicksand. There is all kinds of legal precedent backing up Palm’s reverse engineering of the hooks into iTunes. Many larger companies are too afraid of the legal implications of tangling with Apple, but Palm is a cornered beast. They really don’t have a lot to lose, and Apple has everything to lose. Palm knows this. If Apple sues and loses, be prepared for a tidal wave of “ipods” (every smartphone, mp3 player, and modified 1985 walkman) connecting to iTunes in a matter of weeks.

There is nothing “technical” holding back devices from connecting to iTunes; it’s all legal threats keeping devices off right now. I suspect in the end Apple will “decide” (be forced) to license devices for iTunes rather than risk losing a lawsuit and getting no license fees.

I’m a HUGE Apple fan, but I’m totally rooting for Palm on this one!!!

Matt Heaton / President Bluehost.com