Twitter's API limit: Static control in a dynamic world

Twitter is, once again, feeling growing pains. This time the microblogging darling of the social networking world is proactively addressing the problem - by further rate limiting its APIs. Alex Payne, API Lead for Twitter, explained on the Twitter Developers mailing list:

“Starting later this week we’ll be limiting those on the whitelist to 20,000 requests per hour. Yes, you read that right: twenty THOUSAND requests per hour. According to our logs, this accounts for all but the very largest consumers of our API. This is essentially a preventative measure to ensure that no one API client, even a whitelisted account or IP, can consume an inordinate amount of our resources.”

Twitter's restrictions on API calls are reminiscent of early bandwidth management solutions, which limited the amount of bandwidth an application (usually identified by port) could consume. Third-party applications using the Twitter API rate limit further by individual user, spreading their allotment of API calls across their user base. Unfortunately, the limits cause more problems for developers of social applications that act on behalf of users than for actual users of Twitter clients such as TweetDeck and Twhirl; the API calls those clients make are minimal compared to the number consumed by site-based applications like SocialToo, as explained in all its mathematical glory in a SocialToo blog post on the subject.
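As a rough illustration, the static control Twitter describes - a fixed quota per clock hour - behaves like a fixed-window counter. This is a hypothetical sketch, not Twitter's actual implementation; the class name and interface are invented for illustration:

```python
import time

class FixedWindowLimiter:
    """Static rate limiter: a fixed quota per clock-hour window.

    Illustrative sketch only - not Twitter's real implementation.
    """

    def __init__(self, limit_per_hour, clock=time.time):
        self.limit = limit_per_hour
        self.clock = clock
        self.window = None   # which clock-hour the counter belongs to
        self.count = 0

    def allow(self):
        window = int(self.clock() // 3600)  # current clock-hour window
        if window != self.window:           # new hour: reset the counter
            self.window = window
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False                        # quota exhausted until next hour
```

A site-based application in SocialToo's position could then spread its quota by giving each of its users a limiter of its own, e.g. `FixedWindowLimiter(20000 // number_of_users)` - which is exactly why per-user math gets painful as the user base grows.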

For Twitter's part, Payne says that Twitter needs to put a throttle on API access to keep the service available to all developers, and furthermore that its limit will affect fewer than 10 applications (see Is Twitter Strangling its Famous API? on ReadWriteWeb).

The Twitter back-end has had periods of not keeping up with the popularity of the service, and this new limit is clearly an attempt to get ahead of the problem, even if it annoys a few developers and hobbles some services.

           -- Rafe Needleman, "Twitter puts new limits on API calls: Who's affected"

The static control of API calls is designed to reduce the burden on Twitter's servers and ensure service for all clients even during peak periods of usage, such as Inauguration Day earlier this week, when Twitter saw 5x the average number of tweets.

The problem with static control is that it takes no account of actual resource availability. It computes maximum capacity and doles out allowed API calls based on that maximum, and nothing more. Invariably there will be many times during the day when resources are available but API clients cannot take advantage of them, because they are constrained by a static control rather than a dynamic one.
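The difference is easy to see in code. A dynamic control computes the allowance from current conditions rather than from a precomputed maximum. A minimal sketch, assuming a `load()` callable that reports current server utilization between 0.0 and 1.0; the scaling policy here is made up purely for illustration:

```python
class DynamicLimiter:
    """Dynamic control: the per-client allowance tracks spare capacity.

    Illustrative sketch only. `load` is assumed to return current
    utilization in [0.0, 1.0]; the scaling factor is an example policy.
    """

    def __init__(self, base_limit, load):
        self.base_limit = base_limit
        self.load = load

    def current_limit(self):
        headroom = max(0.0, 1.0 - self.load())
        # At 50% load clients get the base quota; with more spare
        # capacity the quota grows, with less it shrinks toward zero.
        return int(self.base_limit * 2 * headroom)
```

Under this policy an idle system hands out twice the base quota, a half-loaded system hands out exactly the base quota, and a saturated system hands out nothing - the allowance follows the resources instead of a number computed in advance.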

Bandwidth management matured years ago through a slow process in which dynamism was introduced to make more efficient use of all available resources without sacrificing end-user experience or the performance of applications and their servers. Burstable resources were the harbinger of modern concepts associated with cloud computing; indeed, the bandwidth-bursting model is almost exactly the one used to explain cloud computing and its "elastic" nature.
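The burstable model works like a token bucket: a steady refill rate sets the long-term average, while the bucket's capacity permits short bursts above it - the same shape as cloud "elasticity." A minimal sketch with illustrative numbers:

```python
import time

class TokenBucket:
    """Token-bucket limiter: a steady refill rate plus a burst allowance.

    Illustrative sketch of the 'burstable' model; rates and capacities
    are example values.
    """

    def __init__(self, rate_per_sec, burst_capacity, clock=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = burst_capacity
        self.tokens = float(burst_capacity)  # start full: bursts allowed
        self.clock = clock
        self.last = clock()

    def allow(self, cost=1.0):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

An idle client accumulates tokens and can burst when it needs to; a client that hammers the API settles down to the refill rate. Capacity unused in one moment is available the next, instead of being thrown away at the window boundary as a fixed quota does.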

One of the reasons cloud computing and virtualization are rapidly growing in popularity and gaining interest is their ability to make more efficient use of resources. With green computing driven by both environmental and financial "green" concerns, operational efficiency is becoming a top priority for IT across a variety of organizations.

Using all available resources - wasting none - is part of such initiatives, and cloud computing combined with virtualization provides much of the basis for implementing such environments. It's a dynamic world, in which resources are allocated and de-allocated, services provisioned and de-provisioned, and APIs limited and unlimited based on current conditions in the network, on the servers, and in the applications. By factoring in all of these variables, resources can be distributed in a way that makes sense at the time each resource is requested.

By using static controls over its API usage, Twitter will undoubtedly leave some resources sitting idle during slower periods of usage. While most would agree it is completely reasonable for Twitter to limit usage of certain less "real-time" API calls during peak periods of use, it seems less reasonable to deny those resources to applications when the resources may, in fact, be available.





More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.