I have spent a lot of time talking about Ruby and Rails with people recently. When you're in that domain, you can't help but have people ask you,
"So, is it true? Rails can't scale?"
It's a fair question,
if somewhat naive. A lot of people have heard about the scaling issues Twitter has had as it has grown into the monolith that it is. The blame has been placed squarely on Rails, even by some of Twitter's own management.
It's just not that simple. The kind of application that will service the number of requests they are seeing is not the kind of application that happens by accident. However, I'm not going to attempt to answer whether or not Rails can scale, because
I believe the question is fundamentally flawed. Instead, I want to talk about the concept of scalability, and why
your question shouldn't be
"Can X scale," but rather,
"How does X scale?"
Building an application is a bit like constructing a building. You have to choose the right materials, tools, and techniques. You have to plan in advance. You have to make trade-offs about durability, flexibility, and all the other
-ilities. Most web sites out there require very little scalability, because they'll never see more than a request every ten seconds. Some may get lucky and see one hit a second. The very best may see more! Consider that a million hits per day is only approximately ten hits per second. That's really not all that impressive.
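To sanity-check that figure, here's a quick back-of-the-envelope calculation in Ruby (assuming, unrealistically, that traffic is spread evenly across the day; real traffic is bursty, so peaks run higher):

    requests_per_day = 1_000_000
    seconds_per_day  = 24 * 60 * 60              # 86,400
    puts requests_per_day / seconds_per_day.to_f # => roughly 11.6 requests per second, on average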
There are two widely accepted types of scaling: "scaling up" and "scaling out." Each has its pros and cons, but I feel it's important to define them before considering the bigger picture.
"Scaling Up" refers to the applications ability to be placed to "Big Metal"- Think old time main frames. They are applications that are meant to have one instance of the application servicing every request. This is the easiest to conceptualize. You only need one program running, and as long as you buy powerful enough hardware, you can get away with any number of requests. There is a hard limit to this kind of system, though. When you aren't parallelizing tasks, you can end up with a lot of downfalls. Such as deadlocks, stalls, and more. What happens on a hardware failure? How do you plan for that, without having a second massively expensive install? You don't. Pure and simple. It's expensive. Very expensive. But it's simple to maintain.
"Scaling Out" refers to the applications ability to be massively parallel on any number of given systems. This could be commodity level systems, on out to high powered blades (or even mainframes). It's not about the hardware. It's about the software. Run it wherever you want, they'll all cluster together. This kind of scalability requires a lot of advanced planning, and forethought to be able to run twenty applications side by side, and have them buzz along happily. This tends to be why many applications need to be reworked when they get to the point where thousands of users are accessing them regularly. But if your application is set up correctly, you can grow with it, on demand. Just by bringing up a few new servers to service more requests. Scaling out tends to be the preferred method of modern scaling needs. You don't anticipate your need, you buy hardware as you need it. Backups are only as costly as having a few extra systems standing by.
Now, taking the earlier example: Instead of having to service
a million requests per day, what happens when you have to service a
hundred million? Or more? You're now looking at more than
one thousand requests per second. The same system that can happily buzz along and handle one or two, or even ten, requests per second will no longer be capable of handling that load. It will realistically be crushed under the weight.
Crushed. If you didn't plan for it, it won't be capable of it. When you build a doghouse, you don't expect it to house hundreds of people,
right?
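Running the same back-of-the-envelope arithmetic at the larger scale (again assuming an even spread across the day, which flatters the real peak load):

    puts 100_000_000 / 86_400.0 # => roughly 1,157 requests per second, sustained all day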
That means that you need to think about how to handle that load. Build a foundation that can handle it: pick tools and frameworks that you can vet.
Some key questions you really should be asking: How many requests per second can your system service? Will your instances need to talk to each other? How? Are you persisting data? If you are, how many requests can your persistence tier handle? Can it scale out, too? How? Has someone else done what you're trying to do with the tools that you're using? At what scale? What pitfalls did they run into? How can you avoid them?
The bottom line is... Don't fall into the
Sucks/Rocks dichotomy. Especially if you haven't fully evaluated what you're talking about.
Remember:
Facebook is written in PHP,
YouTube is written in Python,
Twitter is written in Ruby,
Amazon's systems are written in multiple languages, as are
Google's. It's not about the language. It's about how you use it.