Can you tell us, in a nutshell, what Windows Azure is?
The Windows Azure Platform is a set of cloud computing services scheduled to go live on the Web on January 1, 2010 as part of Microsoft's new Server and Cloud Division (SCD). SCD is a newly created unit of the Server and Tools Business (STB), which STB president Bob Muglia runs. The Windows Azure component is a cloud-based operating system built on virtualized Windows Server 2008 instances. It offers a set of highly scalable, highly available persistent-data services: schemaless tables based on the entity-attribute-value (EAV) data model; free-form blobs that emulate a conventional directory-style file system; and queues for messaging between external (terrestrial) and internal (cloud) endpoints. The default data-access protocol is a RESTful Atom 1.0 publishing format established by ADO.NET Data Services (a.k.a. Project Astoria). A .NET StorageClient library enables developers to treat tables, blobs, and queues as managed .NET objects. To the relief of many .NET developers, StorageClient moved from unsupported sample code to an official .NET API in the Windows Azure November 2009 Community Technology Preview (CTP).

In general terms, describe the process of creating Windows Azure applications and how they are hosted.
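As a concrete illustration of the REST-based table access Jennings describes, here is a minimal Python sketch of building a signed request against an Azure table. The account name, key, and table name are hypothetical, and the SharedKeyLite signing shown is a sketch of what the StorageClient library handles internally for managed .NET clients.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def sign_table_request(account, key_b64, resource, date=None):
    """Build request headers for a GET against the Azure Table service.

    Sketch of the SharedKeyLite scheme the Table service's REST
    interface used at the time: sign the request date and the
    canonicalized resource with the account key via HMAC-SHA256.
    """
    date = date or formatdate(usegmt=True)  # RFC 1123 date for x-ms-date
    string_to_sign = f"{date}\n/{account}/{resource}"
    mac = hmac.new(base64.b64decode(key_b64),
                   string_to_sign.encode("utf-8"), hashlib.sha256)
    signature = base64.b64encode(mac.digest()).decode()
    return {
        "x-ms-date": date,
        "Authorization": f"SharedKeyLite {account}:{signature}",
        "Accept": "application/atom+xml",   # results arrive as an Atom feed
    }

# Hypothetical account, key, and table: query all entities in "Customers" via
# GET https://myaccount.table.core.windows.net/Customers()
headers = sign_table_request("myaccount",
                             base64.b64encode(b"not-a-real-key").decode(),
                             "Customers()")
```

The headers would accompany a plain HTTP GET; the response is an Atom feed of entities, which is why the protocol is described above as a RESTful Atom publishing format.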
The local Development Fabric enables you to create and test Windows Azure applications on premises with Visual Studio 2008/2010 and supports connections to local or cloud-based data storage. Azure apps use Roles to define service endpoints: ASP.NET Web Roles support apps with a user interface, while Worker Roles handle behind-the-scenes, UI-less services. Developers upload completed projects to the cloud-based Production Fabric running in one of Microsoft's worldwide data centers. Azure's Web-based management portal simplifies subscribing to, and (as of February 2010) paying for, compute and storage services, as well as data transfer. Uploaded projects run by default in Staging mode for initial cloud-based testing; a single click on the Azure portal's deployment page promotes the project to Production mode. In Production mode, all data is replicated across three or more machines in the same data center; recovery from failures and rebuilding of replicas are fully automatic. Azure's Service Level Agreement (SLA) guarantees 99.95% monthly availability for data and, if you specify two or more instances, for applications and services. After January 1, 2010, subscribers can place storage and application instances in multiple data centers, a process called geolocation, which protects them from natural or man-made disasters. In addition to the current Chicago, IL and San Antonio, TX data centers, Microsoft will add data centers in Dublin, Amsterdam, Hong Kong, and Singapore in the first quarter of 2010.

The Azure Platform also includes a cloud-based SQL component. What is this and what does it provide to developers?
The Azure Platform also includes SQL Azure Database (SADB), which Microsoft CEO Steve Ballmer calls "SQL Server for the cloud." When Microsoft announced Azure at its Professional Developers Conference in late 2008, the Platform included Live Services, .NET Services, SQL Server Data Services (SSDS), SharePoint Services, and Microsoft Dynamics CRM Services. SSDS offered free-form, schemaless EAV tables hosted by a special version of SQL Server 2005. SSDS, later renamed SQL Data Services (SDS), overlapped the capabilities of Azure's EAV tables and created a "choice crisis" for developers over which EAV version to implement. Bowing to the demands of mainstream .NET and SQL Server developers, the SDS team reversed course and introduced a fully relational cloud-based SQL Server version, SQL Azure, in mid-2009. Like other Azure storage services, SQL Azure stores at least three database replicas on separate hardware in a single data center. Initially, SQL Azure databases are available in two sizes: a 1-GB Web Edition at $9.95 per month and a 10-GB Business Edition at $99.95 per month. Potential SQL Azure users were put off by the lowly 10-GB size limit, which was needed to enable reasonably fast recovery from storage failures. SQL Azure team members promised at PDC 2009 an increase in maximum database size, but they wouldn't disclose the size of the increment. The team also promises future support for OLAP and business intelligence (BI) features, as well as a data-synchronization feature called DataHub to be implemented by Project Huron. The Microsoft Sync Framework (Sync Fx) and its new Power Pack for SQL Azure November CTP go a long way toward achieving Project Huron's goals, but OLAP and BI capabilities probably will arrive later in 2010. At PDC 2009, Ray Ozzie introduced Project Dallas, a new service that lets users discover, purchase, and manage premium data subscriptions on the Windows Azure platform.
A Dallas Service Explorer lets you "visually construct REST API queries and preview the content in XML, ATOM, RAW (for blob and real-time content), or in Table view (for structured data)." Project Dallas will undoubtedly contribute to the DataHub in 2010.

You mentioned that the original vision of Windows Azure also included Live Services, SharePoint Services, and .NET Services components. What became of these components?
Live Services, SharePoint Services, and Microsoft Dynamics CRM Services were factored out of the Azure Platform in the transition to SCD, while .NET Services was renamed Windows Azure AppFabric. .NET Services originally included Access Control Services (ACS), a Service Bus with its own queues and routers, and Workflow features, but the team dropped Workflow in mid-2009 in favor of waiting for the vastly improved workflow services in .NET Framework 4 and VS 2010. Service Bus queues and routers were victims of breaking changes in the November 2009 CTP, but the new Windows Identity Foundation (WIF, formerly Project Geneva) simplifies implementation of ACS-based federated identity authentication and authorization services.

Is it possible for IT departments to host their own Windows Azure applications on premises, rather than on Microsoft's servers?
One of cloud computing's primary bugaboos is vendor lock-in to a single supplier's data center infrastructure and the corresponding inability to bring cloud-based applications and data back to the organization's premises quickly and easily. There's much current conversation in blogs and the trade press about bringing the Azure Platform on premises to create private or hybrid clouds. During 2008 and most of 2009, the Azure team steadfastly denied they had any intention of enabling third-party organizations to clone Microsoft data-center software. However, Microsoft announced at PDC 2009 Project Sydney, which uses IPv6, IPSec, and WIF capabilities to enable developers to fail cloud apps over to on-premises servers. Azure senior architect Hasan Alkhatib is reported to have said at an Xconomy forum held at Microsoft's Cambridge, MA Research and Development Center in early December 2009: "Every customer says, 'where can we get a private cloud?' We're building them. Within a short period of time, private clouds will be available with the same technology we've used to build Windows Azure." In other words, private and hybrid Azure clouds are inevitable.
Read part two.
About Roger Jennings
Roger Jennings is the principal consultant of OakLeaf Systems and the author of 30+ books about Microsoft operating systems (Windows NT and 2000 Server), databases (SQL Server and Access), .NET data access, Web services, and InfoPath 2003.
Patrick Meader conducted the interview with Roger Jennings. Patrick is a freelance editor and writer with more than 16 years' experience working for technical magazines.