Deployments found: 18
"At Core Digital Media, we've integrated Alexa for Business with our enterprise BI platform, MicroStrategy, for three primary reasons. First, to empower our executives and leaders with real-time business KPI updates so that they can ask Alexa anytime, anywhere. Second, to make meetings more productive by providing easy access to data-related questions that need immediate answers so teams could make smarter decisions faster as a group. Last, we strive for continuous innovation and believe voice is the future of UX. Alexa for Business is a great way to implement conversational interfaces that remove barrier between Human and Computer/Data.” -Willy Custodio, Manager of Business Intelligence, Core Digital Media
"At Express Dedicated LLC, we take pride in providing the best service to our customers. Knowing the location of the trucks and ensuring their seamless operation is critical for our business. With Alexa for Business, we built a private skill integrating our management solution, so we can get the location of the trucks just by asking Alexa. We are expanding our work with Alexa for Business, and are building a complete voice enabled truck management solution so we can, using Alexa, proactively notify drivers and dispatchers if they are in violations of hours of service and take action as required by Federal Motor Carrier Safety Administration." - Kevin Ramroop, Chief Financial Officer, Express Dedicated LLC
"Ryanair is moving all of our audio, video and web conferencing to Amazon Chime. Reliable, on-time communication is as critical to Ryanair as our on-time flights. We operate over 2,000 flights every day, carrying over 150 million customers annually, connecting 37 countries. With Chime, meetings auto-call participants to start on time, allowing our operations teams in over 200 airports help maintain our industry leading punctuality (93% of Ryanair flights arrived on time in February, 2019). During our evaluation of Chime, we found the user training needs were minimal because the solution is so intuitive to use. We are also exploring the use of Chime video for recruitment, which will make it more convenient for candidates to join interviews remotely. Chime is helping improve our communications experience for our employees, which helps us continue to focus on the most important part of our business – our customers and offering them the lowest fares.", John Hurley, Chief Technology Officer - Ryanair
Why Amazon Web Services
After monitoring multiple CDNs for a few weeks, PBS Interactive found that CloudFront had a significantly lower error rate than the incumbent CDN. As a result, it migrated the majority of PBS videos to Amazon S3 storage within a matter of weeks and began delivering that content via Amazon CloudFront. Since the migration, PBS Interactive says it has experienced fifty percent fewer errors in its video streaming performance. The department also conducts testing more quickly with the help of Amazon CloudFront’s invalidation request feature and by analyzing CloudFront log files. The invalidation feature improves PBS Interactive’s testing by rapidly removing bad files and quickly refreshing the cache. Engelson believes that “Amazon CloudFront fits well with the other AWS services used by PBS. The team members have enjoyed their conversations with the AWS team as they have migrated to Amazon CloudFront, and they were pleased when the Amazon CloudFront invalidation feature was released shortly after they needed that feature.”
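As a rough illustration of the invalidation workflow described above (not PBS Interactive's actual tooling), the following Python (boto3) sketch submits an invalidation request for a set of cached paths; the distribution ID and paths are hypothetical placeholders.

```python
import time
import boto3

# Hypothetical distribution ID -- a placeholder for illustration only.
DISTRIBUTION_ID = "EDFDVBD6EXAMPLE"

cloudfront = boto3.client("cloudfront")

def invalidate_paths(paths):
    """Ask CloudFront to evict the given paths from every edge cache."""
    response = cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            # CallerReference must be unique per request; a timestamp works.
            "CallerReference": str(time.time()),
        },
    )
    return response["Invalidation"]["Id"]

# Example: purge a bad video segment during testing so the cache refreshes quickly.
invalidation_id = invalidate_paths(["/videos/episode-042/*"])
print("Invalidation submitted:", invalidation_id)
```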
The Benefits
Today, PBS Interactive is delivering nearly all of its streaming video through Amazon CloudFront. This equates to more than one petabyte of video content delivered every month. In addition, PBS Interactive uses multiple third-party providers to transcode and segment mobile video assets, which are then delivered through Amazon CloudFront to PBS’ mobile apps for the Apple iPhone and iPad. Engelson says, “As with all the AWS services we leverage, using Amazon CloudFront is so simple and reliable that the team doesn’t have to think about it. It all just works, freeing us to focus on building cool applications.” He concludes, “We are extremely pleased with the performance and ease of use that CloudFront offers for streaming videos to different devices. With fewer errors, CloudFront delivers a great experience to our viewers, and that’s very important for the success of our business.”
"In a proof-of-concept phase that lasted about three business days, I was able to bring Amazon Connect up, take a simple call flow, and seamlessly integrate with our CRM system," says Sondhi. "Once we started putting Amazon Connect into production, we trained hundreds of associates in just 30 minutes each and achieved 100 percent adoption for our direct bank and fraud operations in just five months. That’s more than twice as fast as prior migrations of this magnitude have taken."
“The contact center has always been the lifeblood of customer support, which is our key differentiator,” says Kerry Bowley, product manager at Rackspace. “When we tried to modernize on top of our legacy system, we hit roadblock after roadblock.”
Franco Lazzarino, software developer at Rackspace, concurs: “Our team’s development skillset was not highly aligned with the telecom niche. Even basic call control and monitoring required significant engineering effort.” Then Rackspace discovered Amazon Connect, the self-service, cloud-based contact center service built on Amazon Web Services (AWS). Amazon Connect is based on the same contact center technology used by Amazon customer service associates around the world.
“We were locked into proprietary hardware and paying for an expensive service that was not particularly robust,” says Tim Choate, CEO and founder of RedAwning. “We didn’t have the features such as call monitoring and tracking that we needed to drive efficiency, and our agents were tied down to a very limited number of locations.” Those are a few of the reasons that RedAwning moved to Amazon Connect, a self-service contact center service that runs on Amazon Web Services (AWS). Based on the same contact center technology used by Amazon customer service associates around the world to power millions of customer conversations, Amazon Connect enables RedAwning to deliver better customer service at lower cost. Using Amazon Connect, RedAwning has gained major new capabilities, including easy-to-deploy virtual agents powered by artificial intelligence, while cutting costs by 80 percent compared to its previous contact center solution. RedAwning pays by the minute for usage and has no infrastructure to manage, enabling it to scale without adding staff or incurring capital costs. Given that RedAwning has tripled in size annually since its founding, these benefits are critically important to its business success.
The Challenge
As a successful media and entertainment company, LIONSGATE was faced with IT challenges that confront many growing businesses:
- Ever-expanding infrastructure and costs
- Increasing enterprise application workloads
- Tighter time-to-market requirements
Why Amazon Web Services
Theresa Miller, Executive Vice President, Information Technology for LIONSGATE, explains why the company decided to enlist Amazon Web Services (AWS) to help them meet these objectives: “The economics were compelling. AWS cloud services proved to be easy to use via the Management Console, APIs, and tools. The system is secure and flexible to work with. Also, working with AWS as a company was a very positive experience.”
LIONSGATE started using the following AWS products in 2010:
- Amazon Simple Storage Service (Amazon S3) for storage
- Amazon Elastic Compute Cloud (Amazon EC2) for compute
- Amazon Elastic Block Store (Amazon EBS) for Amazon EC2 storage
"A Shared File System should be easy to set up and scale to your needs as you grow, with minimal effort. Taking advantage of Amazon EFS, our customers can deploy JIRA Data Center clusters through CloudFormation templates with only a few clicks."- Brad Bressler, Technical Account Manager
About Atlassian
Atlassian is an enterprise-software company whose products project managers, software developers, and content managers use to work more effectively in teams. Its primary application is an issue-tracking solution called JIRA. Atlassian has more than 1,800 employees serving more than 68,000 customers and millions of users.
The Challenge
At Atlassian, growth is on a fast track. The company adds more customers every day and consequently needed an easy way to scale its support instance of JIRA, which is growing by 15,000 support tickets every month. The instance supporting this site, support.atlassian.com, was previously hosted in a data center, which created challenges for scaling. “The scale at which we were growing made it difficult to quickly add nodes to the application,” says Brad Bressler, technical account manager for Atlassian. “This is our customer-facing instance, which gathers all the support tickets for our products globally. It’s one of the largest JIRA instances in the world, and growing and maintaining it on premises was getting harder to do.” For example, the support.atlassian.com instance was hosted on a single on-premises server, which the company needed to frequently take down for maintenance. The company also needed to ensure high availability for JIRA. “This is a mission-critical application, and the number of customers potentially impacted by downtime is huge,” says Neal Riley, principal solutions engineer for Atlassian. “As we grew, we became more concerned about the resiliency and disaster-recovery capabilities of the data center.”
To move into a more scalable, highly available environment, Atlassian created JIRA Data Center, a new enterprise version of the application. However, JIRA Data Center required shared storage. “We needed a shared file system so the individual application nodes could have a shared source of truth for profile information, plug-ins, and attachments,” says Riley.
Why Amazon Web Services
Atlassian also needed to respond to customers wanting to run JIRA on the Amazon Web Services (AWS) Cloud. “We initially looked at several vendors, but AWS was the clear leader,” Bressler says. “We needed automatic scaling and reliability, and AWS offered us that.” The company migrated JIRA Data Center to the AWS Cloud, running all application nodes on Amazon Elastic Compute Cloud (Amazon EC2) instances. Atlassian takes advantage of Auto Scaling groups to enable automatic scaling of both applications, and uses Elastic Load Balancing to redirect application traffic to Amazon EC2 instances for consistent performance. After evaluating several options for JIRA shared storage on AWS, Atlassian chose to use Amazon Elastic File System (Amazon EFS) to support attachments and log-application files for support.atlassian.com. “Amazon EFS gives us an easy way to scale our customer-facing instances of JIRA, so our teams can more quickly jump on support cases,” says Bressler.
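To illustrate the kind of setup involved, here is a minimal Python (boto3) sketch that creates a shared Amazon EFS file system and one mount target per subnet so nodes in each Availability Zone can mount it; the subnet and security group IDs are hypothetical, and this is not Atlassian's actual provisioning code.

```python
import time
import boto3

efs = boto3.client("efs")

# Hypothetical subnet and security group IDs -- placeholders only.
SUBNET_IDS = ["subnet-0aaa1111", "subnet-0bbb2222"]
SECURITY_GROUP_ID = "sg-0ccc3333"

# Create the shared file system that every JIRA Data Center node would mount.
fs = efs.create_file_system(
    CreationToken="jira-shared-home",   # idempotency token
    PerformanceMode="generalPurpose",
)
fs_id = fs["FileSystemId"]

# Wait until the file system is available before adding mount targets.
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0]["LifeCycleState"] != "available":
    time.sleep(5)

# One mount target per subnet, so each AZ mounts the file system locally.
for subnet_id in SUBNET_IDS:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=[SECURITY_GROUP_ID],
    )

print("Shared file system ready:", fs_id)
```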
The company then created an AWS CloudFormation template for deploying JIRA Data Center on AWS. Atlassian also takes advantage of Amazon CloudWatch to monitor JIRA. “We’re using CloudWatch to monitor RAM usage and bandwidth, so we can more easily optimize the application,” says Bressler. Because the company believes in using its own software, Atlassian also deploys JIRA Data Center for internal support tickets, which it runs on AWS.
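The following hedged boto3 sketch shows one way such metrics can be pulled from CloudWatch; the instance ID is a placeholder, and RAM metrics would in practice come from an agent publishing to a custom namespace rather than the built-in EC2 namespace.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical instance ID -- a placeholder, not one of Atlassian's nodes.
INSTANCE_ID = "i-0123456789abcdef0"

def average_network_in(instance_id, hours=1):
    """Average inbound bandwidth (bytes per 5-minute period) for one EC2 node."""
    now = datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="NetworkIn",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(stats["Datapoints"], key=lambda p: p["Timestamp"])
    return [(p["Timestamp"], p["Average"]) for p in points]

# RAM is not a built-in EC2 metric; it comes from an agent publishing to a
# custom namespace (for example the CloudWatch agent's "CWAgent"), queried the same way.
print(average_network_in(INSTANCE_ID))
```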
The Benefits
Prior to using Amazon EFS as a shared file system for JIRA Data Center, Atlassian tested the solution internally. During testing, the company discovered the technology was simple to set up and enabled consistent throughput and capacity that stayed within threshold. “Once we went live, everything worked exactly as we expected it to,” says Bressler. “It was performant, resilient, and easy to set up, and it is easy to maintain.” Using Amazon EFS, Atlassian customers can now run an enterprise version of JIRA in the cloud. “A Shared File System should be easy to set up and scale to your needs as you grow, with minimal effort,” says Bressler. “Taking advantage of Amazon EFS, our customers can deploy JIRA Data Center clusters through CloudFormation templates with only a few clicks.”
Because it can more easily manage its JIRA instances in the cloud, Atlassian is putting more effort into enhancing applications. “By moving to the AWS Cloud, our company has been able to focus more on what we do well: providing great services to our customers,” says Bressler. “Instead of having to spend time on managing the back-end application stack, we can really step up our game and better support our tens of thousands of global customers.”
By moving to the cloud, Atlassian is also able to efficiently grow JIRA Data Center. “We can much more rapidly scale our application using Amazon EFS and Auto Scaling,” says Riley. “If we had an event that required us to add 10,000 customers, it would previously have taken weeks, if not months, to plan for it because of the complexity. Using AWS, we have everything in place to support that traffic immediately.”
Atlassian is better supporting its customers by utilizing the built-in disaster recovery and high availability of AWS. “We have better disaster-recovery capabilities and better uptime because our application data is replicated across multiple AWS Availability Zones,” says Riley. “If our application instances go down, we’re stopping thousands of people from getting support. By moving to a highly available platform on AWS, we are much more confident that our solutions are available at all times.” The company will likely migrate more applications to AWS in the coming months. Riley says, “We trust in AWS to help us grow our company in a flexible and cost-effective way, and we will be expanding our relationship with AWS well into the future.”
Yelp has established a loyal consumer following, due in large part to its vigilance in protecting users from shill or suspect content. Yelp uses an automated review filter to identify suspicious content and minimize exposure to the consumer. The site also offers a wide range of other features that help people discover new businesses (lists, special offers, and events) and communicate with each other. Additionally, business owners and managers are able to set up free accounts to post special offers, upload photos, and message customers. The company has also been focused on developing mobile apps and was recently voted into the iTunes Apps Hall of Fame. Yelp apps are also available for Android, Blackberry, Windows 7, Palm Pre, and WAP. Local search advertising makes up the majority of Yelp’s revenue stream. The search ads are colored light orange and clearly labeled “Sponsored Results.” Paying advertisers are not allowed to change or re-order their reviews.
Why Amazon Web Services
Yelp originally depended upon giant RAIDs to store its logs, along with a single local instance of Hadoop. When Yelp made the move to Amazon Elastic MapReduce (Amazon EMR), it replaced the RAIDs with Amazon Simple Storage Service (Amazon S3) and immediately transferred all Hadoop jobs to Amazon Elastic MapReduce.
“We were running out of hard drive space and capacity on our Hadoop cluster,” says Yelp search and data-mining engineer Dave Marin. Yelp uses Amazon S3 to store daily logs and photos, generating around 1.2 TB of logs per day. The company also uses Amazon EMR to power approximately 20 separate batch scripts, most of which process the logs. Features powered by Amazon Elastic MapReduce include the following (a minimal job-submission sketch follows the list):
- People Who Viewed this Also Viewed
- Review highlights
- Autocomplete as you type in search
- Search spelling suggestions
- Top searches
- Ads
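Yelp's actual batch scripts are not published in this case study; purely as an illustrative sketch, the boto3 call below shows how a single log-processing step might be submitted to a transient EMR cluster today. The bucket names, script path, release label, and instance sizes are all assumptions.

```python
import boto3

emr = boto3.client("emr")

# Hypothetical bucket and script locations -- placeholders only.
LOG_BUCKET = "s3://example-yelp-logs"
SCRIPT_BUCKET = "s3://example-yelp-scripts"

response = emr.run_job_flow(
    Name="nightly-log-batch",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # shut the cluster down when the step finishes
    },
    Steps=[
        {
            "Name": "review-highlights",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    f"{SCRIPT_BUCKET}/review_highlights.py",
                    "--input", f"{LOG_BUCKET}/2024-01-01/",
                    "--output", f"{LOG_BUCKET}/derived/review-highlights/",
                ],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri=f"{LOG_BUCKET}/emr-logs/",
)
print("Cluster started:", response["JobFlowId"])
```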
The Challenge
Historically, inmates at correctional facilities were not allowed to use computers with Internet access, for fear that such access would let them harass victims or plan crimes. Technical complexities and a lack of local resources made it nearly impossible to provide online learning in prisons.
The Louisiana Department of Public Safety and Corrections wanted to improve inmate education, and post-prison outcomes, by implementing a new IT environment to support a better and more reliable online learning solution. It also needed to ensure system security so inmates had no access to the Internet.
It sought to replace the on-premises system that hosted the learning solution due to frequent technical problems that often led to downtime. The agency also wanted to eliminate the need for its small IT team to manage the solution or spend time keeping outdated technology up and running.
It sought an easier way to update training content and cost-effectively expand the program to additional correctional facilities.
The Solution
The Louisiana Department of Public Safety and Corrections worked with ATLO Software, a provider of secure educational solutions for correctional facility students, to deploy educational training labs at four Louisiana correctional facilities.
Each lab consists of 10 workstations running Amazon WorkSpaces, a managed, secure desktop computing service that runs in the Amazon Web Services (AWS) cloud. The lab configuration uses a multilayered security approach, combining Amazon WorkSpaces with a secure network within an Amazon Virtual Private Cloud (Amazon VPC).
Using Amazon WorkSpaces along with ATLO educational software, the department can quickly get a new training lab up and running, making it cost-effective and simple to expand the program to additional facilities. Inmates use Amazon WorkSpaces to access a personal ATLO account, which tracks their coursework and test results. The solution is locked down so inmates can only access their ATLO account and not the public Internet.
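As a rough sketch of how such a lab might be provisioned programmatically (not ATLO's or the department's actual tooling), the boto3 call below creates one WorkSpace per lab seat. The directory and bundle IDs are placeholders, and the Internet lockdown itself comes from the Amazon VPC and security-group configuration rather than from this API call.

```python
import boto3

workspaces = boto3.client("workspaces")

# Hypothetical directory and bundle IDs -- placeholders for illustration.
DIRECTORY_ID = "d-9067xxxxxx"
BUNDLE_ID = "wsb-xxxxxxxxx"

def provision_lab(usernames):
    """Provision one WorkSpace per lab workstation user."""
    response = workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": DIRECTORY_ID,
                "UserName": name,
                "BundleId": BUNDLE_ID,
                "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
            }
            for name in usernames
        ]
    )
    return response["PendingRequests"], response["FailedRequests"]

# Example: ten workstations for one training lab.
pending, failed = provision_lab([f"lab1-seat{i:02d}" for i in range(1, 11)])
print(f"{len(pending)} WorkSpaces provisioning, {len(failed)} failed")
```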
The Benefits
- Enables better inmate outcomes. Using the onsite labs, inmates can pursue college credits or degrees, receive vocational training, and learn about career opportunities available to them once they are released from prison. “Rehabilitation through education is now a reality thanks to ATLO and Amazon WorkSpaces,” says Dawson Andrews, IT director of the Louisiana Department of Corrections. “There is less chance of these inmates recycling back into the system. This is not only a benefit to the inmates themselves, it is a benefit to their community and future generations.” The solution has also made it possible for the department to partner with local companies to create job opportunities.
- Better security. With the integration of Amazon WorkSpaces, ATLO software, and Amazon VPC, the department of corrections can confidently offer a secure learning program and prevent inmate access to locations outside the learning environment. The AWS security model makes it possible for the department to offer a connected solution, which is essential for delivering updated, relevant courseware and tracking progress.
- Ensures high availability. By enabling a more reliable environment for the web-based learning system, the department can help inmates concentrate on their education instead of worrying why software and systems aren’t working.
- Speeds deployment. The department’s IT team can get new connected training labs up and running in as little as 90 minutes. This results in three major benefits: it’s easy to roll out training labs in new facilities, to keep content up to date, and to add new content at any time.
- Reduces the need for IT staff. The department’s IT staff no longer needs to spend time managing servers and manually deploying software updates. Now, software updates can be pushed to any lab or workstation by restarting the zero clients.
Why Amazon Web Services
Coinbase evaluated different cloud technology vendors in late 2014, but it was most confident in Amazon Web Services (AWS). In his previous role at NASA’s Jet Propulsion Laboratory, Witoff gained experience running secure and sensitive workloads on AWS. Based on this, Witoff says he “came to trust a properly designed AWS cloud.” The company began designing the new Coinbase Exchange by using AWS Identity and Access Management (IAM), which securely controls access to AWS services. “Cloud computing provides an API for everything, including accidentally destroying the company,” says Witoff. “We think security and identity and access management done correctly can empower our engineers to focus on products within clear and trusted walls, and that’s why we implemented an auditable self-service security foundation with AWS IAM.”
The exchange runs inside the Coinbase production environment on AWS, powered by a custom-built transactional data engine alongside Amazon Relational Database Service (Amazon RDS) instances and PostgreSQL databases. Amazon Elastic Compute Cloud (Amazon EC2) instances also power the exchange. The organization provides reliable delivery of its wallet and exchange to global customers by distributing its applications natively across multiple AWS Availability Zones.
Coinbase created a streaming data insight pipeline in AWS, with real-time exchange analytics processed by Amazon Kinesis, a managed service for streaming data. “All of our operations analytics are piped into Kinesis in real time and then sent to our analytics engine so engineers can search, query, and find trends from the data,” Witoff says. “We also take that data from Kinesis into a separate disaster recovery environment.” Coinbase also integrates the insight pipeline with AWS CloudTrail log files, which are sent to Amazon Simple Storage Service (Amazon S3) buckets, then to the AWS Lambda compute service, and on to Kinesis containers based on Docker images. This gives Coinbase complete, transparent, and indexed audit logs across its entire IT environment. Every day, 1 TB of data—about 1 billion events—flows through that path. “Whenever our security groups or network access controls are modified, we see alerts in real time, so we get full insight into everything happening across the exchange,” says Witoff.
For additional big-data insight, Coinbase uses Amazon Elastic MapReduce (Amazon EMR), a web service that uses the Hadoop open-source framework to process data, and Amazon Redshift, a managed petabyte-scale data warehouse. “We use Amazon EMR to crunch our growing databases into structured, actionable Redshift data that tells us how our company is performing and where to steer our ship next,” says Witoff. All of the company’s networks are designed, built, and maintained through AWS CloudFormation templates. “This gives us the luxury of version-controlling our network, and it allows for seamless, exact network duplication for on-demand development and staging environments,” says Witoff. Coinbase also uses Amazon Virtual Private Cloud (Amazon VPC) endpoints to optimize throughput to Amazon S3, and Amazon WorkSpaces to provision cloud-based desktops for global workers. “As we scale our services around the world, we also scale our team. We rely on Amazon WorkSpaces for on-demand access by our contractors to appropriate slices of our network,” Witoff says. Coinbase launched the U.S. Coinbase Exchange on AWS in February 2015, and recently expanded to serve European users.
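The case study describes CloudTrail files landing in Amazon S3, being picked up by AWS Lambda, and flowing on to Kinesis. A minimal sketch of that middle hop, assuming a hypothetical stream name and not reflecting Coinbase's actual code, might look like the following Lambda handler.

```python
import gzip
import json
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

STREAM_NAME = "audit-insight-pipeline"   # hypothetical stream name

def handler(event, context):
    """Triggered by S3 when CloudTrail delivers a new log file; fan the
    individual audit events into a Kinesis stream for downstream analytics."""
    for s3_record in event["Records"]:
        bucket = s3_record["s3"]["bucket"]["name"]
        key = unquote_plus(s3_record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))   # CloudTrail files are gzipped JSON

        events = trail.get("Records", [])
        # put_records accepts at most 500 records per call.
        for i in range(0, len(events), 500):
            kinesis.put_records(
                StreamName=STREAM_NAME,
                Records=[
                    {
                        "Data": json.dumps(e).encode("utf-8"),
                        "PartitionKey": e.get("eventSource", "unknown"),
                    }
                    for e in events[i : i + 500]
                ],
            )
```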
The Benefits
Coinbase is able to securely store its customers’ funds using AWS. “I consider Amazon’s cloud to be our own private cloud, and when we deploy something there, I trust that my staff and administrators are the only people who have access to those assets,” says Witoff. “Also, securely storing bitcoin remains a major focus area for us that has helped us gain the trust of consumers across the world. Rather than spending our resources replicating and securing a new data center with solved challenges, AWS has allowed us to hone in on one of our core competencies: securely storing private keys.”
Coinbase has also relied on AWS to quickly grow its customer base. “In three years, our bitcoin wallet base has grown from zero to more than 3 million. We’ve been able to drive that growth by providing a fast, global wallet service, which would not be possible without AWS,” says Witoff. Additionally, the company has better visibility into its business with its insight pipeline. “Using Kinesis for our insight pipeline, we can provide analytical insights to our engineering team without forcing them to jump through complex hoops to traverse our information,” says Witoff. “They can use the pipeline to easily view all the metadata about how the Coinbase Exchange is performing.” And because Kinesis provides a one-to-many analytics delivery method, Coinbase can collect metrics in its primary database as well as through new, experimental data stores. “As a result, we can keep up to speed with the latest, greatest, most exciting tools in the data science and data analytics space without having to take undue risk on unproven technologies,” says Witoff.
As a startup company that built its bitcoin exchange in the cloud from day one, Coinbase has more agility than it would have had if it created the exchange internally. “By starting with the cloud at our core, we’ve been able to move fast where others dread,” says Witoff. “Evolving our network topology, scaling across the globe, and deploying new services are never more than a few actions away. This empowers us to spend more time thinking about what we want to do instead of what we’re able to do.” That agility is helping Coinbase meet the demands of fast business growth. “Our exchange is in hyper-growth mode, and we’re in the process of scaling it all across the world,” says Witoff. “For each new country we bring on board, we are able to scale geographically and at the touch of a button launch more machines to support more users.” By using AWS, Coinbase can concentrate even more on innovation. “We trust AWS to manage the lowest layers of our stack, which helps me sleep at night,” says Witoff. “And as we go higher up into that stack—for example, with our insight pipeline—we are able to reach new heights as a business, so we can focus on innovating for the future of finance.”
With the acquisition of hardware and platform partner AlertMe in 2015, Centrica Connected Home was faced with the prospect of a significant shift in focus. Previously, the relationship had been one of vendor and customer, with AlertMe also pursuing its own goals for expansion and licensing of its software. After the acquisition, Centrica Connected Home moved quickly to integrate the technical talent from the two companies and then to realign the development efforts of the teams. The new common goals of product evolution, feature enhancement, and international launch presented a number of challenges in the form of a rapid scaling requirement for the live platform, whilst maintaining stability and availability. Added to these demands were an expansion into new markets and brand-new product launches, including a smart boiler service and a growing ecosystem of new Hive smart home devices. The team even found time to develop deeply functional Alexa skills for its products, making Centrica Connected Home a Smart Home launch partner for the Amazon Echo in the UK in 2016.
Why Amazon Web Services
The entire end-to-end infrastructure on which the Hive platform is based—including marketing and support websites, data collection services, and the real-time store for user and analytics data—runs on AWS technologies. The core technologies used to power Hive are Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Simple Storage Service (Amazon S3). The new challenges meant the team had to seek solutions in additional specialised, managed AWS services. Working with the AWS IoT service team under Claudiu Pasa, EMEA IoT Lead for Amazon Web Services, they began a proof-of-concept project to migrate from their existing device management platform to a specialised AWS IoT-based service for new and existing devices. This deeper AWS integration enabled the replacement of other platform components with a leaner, faster Lambda-based microservices infrastructure, with Amazon EC2 and Amazon RDS still playing a large part for longer-lived components such as data stores and platform UIs. Additional use of integrated AWS services such as Amazon S3 data storage and web hosting, Amazon API Gateway, Amazon Cognito, and Amazon CloudFront offers attractive benefits when used in concert with more traditional infrastructure: lower latency to the customer, fewer scalability limitations, and more resilience. This allows the engineering team to focus on systems that add value to the business, such as advanced monitoring using AWS partner Wavefront, aggregated logging and application analysis using Amazon Elasticsearch Service, and cost analysis and attribution using resource tags and consolidated billing in AWS Organizations.
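As an illustration of the Lambda-based microservice pattern described above (not Hive's actual code), the sketch below shows a small function that an AWS IoT rule could invoke for each device reading and that republishes an alert over MQTT. The topic names, message fields, and threshold are all assumptions.

```python
import json
import boto3

iot_data = boto3.client("iot-data")

# Hypothetical topic and threshold -- placeholders, not Hive's real message schema.
ALERT_TOPIC = "home/alerts"
MAX_TEMP_C = 35.0

def handler(event, context):
    """Invoked by an AWS IoT rule such as SELECT * FROM 'devices/+/telemetry'.
    The rule delivers each device reading as the Lambda event payload."""
    device_id = event.get("device_id", "unknown")
    temperature = float(event.get("temperature_c", 0.0))

    over_limit = temperature > MAX_TEMP_C
    if over_limit:
        # Re-publish a structured alert that other microservices (or the app) subscribe to.
        iot_data.publish(
            topic=f"{ALERT_TOPIC}/{device_id}",
            qos=1,
            payload=json.dumps(
                {"device_id": device_id, "temperature_c": temperature, "alert": "over_temperature"}
            ),
        )
    return {"device_id": device_id, "alerted": over_limit}
```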
The Benefits
Centrica Connected Home is a great example of lean enterprise in action. Although it’s part of one of the UK’s biggest corporations, it operates in an agile way, learning quickly while delivering a cutting-edge product to hundreds of thousands of satisfied customers. “Our teams are empowered to make their own decisions and mistakes, and can pick up the tools and run with them, trying new things and innovating. AWS helps us to achieve this lean, agile infrastructure because we can work flexibly and without constraint but within a consistent environment,” says Adrian Heesom, COO, Centrica Connected Home. Heesom continues, “Our ability to develop new features is much easier in our AWS environment. Plus, the AWS cloud delivers a consistently available hosting platform for our services. The ease of deploying resources in multiple physical AWS locations gives us confidence in the reliability of our environment.” Christopher Livermore, Head of Site Reliability Engineering at Centrica Connected Home, says, “Leveraging managed, optimised services such as Amazon EC2, Amazon S3, AWS IoT, API Gateway, AWS Lambda, Amazon CloudFront, Amazon RDS, and Amazon Cognito allows our developers and engineers to focus on product delivery and its value to our customers. It abstracts away some of the common problems of operating system configuration and architecture design. It also makes it easier to maintain a good, common framework for product development across all our teams, internationally.” Cost is a two-fold benefit for Centrica Connected Home. It can access a range of environments to experiment cost-effectively, while paying only for IT resources as they’re consumed. It’s a model that the team has adopted for its own products and related services. “More and more of our customers want to ‘pay as they go’ for our Centrica Connected Home products and services,” Heesom says. “This not only aligns with the way we pay for AWS and makes our finance model easier, but it enables us to focus even more resources on innovating our services further.”
After maintaining on-premises hardware and custom publishing software for nearly two decades, The Seattle Times sought to migrate its website publishing to a contemporary content management platform. To avoid the costs of acquiring and configuring new hardware infrastructure and the required staff to maintain it, the company initially chose a fully managed hosting vendor. But after several months, The Times' software engineering team found it had sacrificed flexibility and agility in exchange for less maintenance responsibility. As the hosted platform struggled with managing traffic under a vastly fluctuating load, The Seattle Times team was hamstrung in its ability to scale up to meet customer demand. Tom Bain, the software engineering manager overseeing the migration effort, says, "We had a fairly standard architecture in mind when we set out to do the migration, and we encouraged our vendor to adapt to our needs, but they struggled with the idea of altering their own business model to satisfy our very unique hosting needs."
Why Amazon Web Services
To address these core scalability concerns, The Seattle Times engineering team considered several alternative hosting options, including self-hosting on premises, more flexible managed hosting options, and various cloud providers. The team concluded that the available cloud options provided the needed flexibility, appropriate architecture, and desired cost savings. The company ultimately chose Amazon Web Services (AWS), in part because of the maturity of the product offering and, most significantly, the auto-scaling capabilities built into the service. The Seattle Times' new software is built on the LAMP stack, and the added benefits of native, Linux-based cloud hosting made the most sense when choosing a new vendor. The Seattle Times developed a proof-of-concept and implementation plan, which was reviewed by a team from AWS Support. “They looked over our architecture and said, ‘Here are some things that we recommend you do, some best practices, and some lessons learned,’” says Rob Grutko, director of technology for The Seattle Times. “They were very helpful in making sure we were production ready.” After implementing the desired system architecture and vetting the chosen components and configuration with AWS, The Times deployed its new system in just six hours. The website moved to the AWS platform between 11 p.m. and 3 a.m., and final testing was completed by 5 a.m. — in time for the next news day.
How Seattle Times Uses AWS
Seattletimes.com is now hosted in an Amazon Virtual Private Cloud (Amazon VPC), a logically isolated section of the AWS cloud. It uses Amazon Elastic Compute Cloud (Amazon EC2) for resizable compute capacity and Amazon Elastic Block Store (Amazon EBS) for persistent block-level storage volumes. Amazon Relational Database Service (Amazon RDS) serves as a scalable cloud-based database, Amazon Simple Storage Service (Amazon S3) provides a fully redundant infrastructure for storing and retrieving data, and Amazon Route 53 offers a highly available and scalable Domain Name System (DNS) web service. The Times is using Amazon CloudFront in front of several Amazon S3 buckets to distribute a huge collection of photo imagery. The combination of Amazon CloudFront and Amazon S3 is used to embed photos into news stories distributed to The Times readers with low latency and high transfer speeds. Additionally, Amazon ElastiCache serves as an in-memory “cache in the cloud” in The Times’ new configuration. The Times is also using AWS Lambda to resize images for viewing on different devices such as desktop computers, tablets, and smartphones.
The Benefits
With AWS, The Seattle Times can now automatically scale up very rapidly to accommodate spikes in website traffic when big stories break, and scale down during slower traffic periods to reduce costs. “Auto-scaling is really the clincher to this,” Grutko says. “With AWS, we can now serve our online readers with speed and efficiency, scaling to meet demand and delivering a better reader experience.” Moreover, news images can now be rapidly resized for different viewing environments, allowing breaking-news stories to reach readers faster. “AWS Lambda provides us with extremely fast image resizing,” Grutko says. “Before, if we needed an image resized in 10 different sizes, it would happen serially. With AWS Lambda, all 10 images get created at the same time, so it’s quite a bit faster and it involves no server maintenance.” Rather than relying on a hosting service to fix inevitable systems issues, The Times now has complete control over its back-end environment, enabling it to troubleshoot problems as soon as they occur. “When an issue happens, we can go under the hood and troubleshoot to get around nearly any problem,” says Grutko. “It’s our environment, and we control it.” When the company encounters a problem that it can’t solve, it relies on AWS Support. “Our on-boarding experience was quite good with the AWS support team,” says Miles Van Pelt, senior development engineer at The Seattle Times. “It really felt like they went out of their way to answer our questions and research topics that we couldn't readily find in their extensive documentation.” By choosing AWS, The Seattle Times is now better positioned to deliver on its pursuit of being a leading-edge digital news media company. “By moving to AWS, we’ve regained the agility and flexibility we need to support the company’s journalistic mission without incurring the expense and demands required of a pile of physical hardware,” says Grutko.
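Grutko's point about parallel resizing can be sketched as a simple fan-out: the dispatcher below asynchronously invokes a hypothetical resize function once per target width, so all renditions are produced concurrently rather than serially. The function name, bucket, key, and size list are placeholders, not The Seattle Times' implementation.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and target widths -- illustration only.
RESIZE_FUNCTION = "photo-resize"
TARGET_WIDTHS = [320, 480, 640, 768, 960, 1024, 1280, 1440, 1920, 2560]

def fan_out_resize(bucket, key):
    """Fire one asynchronous resize invocation per target width, so all
    renditions are produced in parallel rather than one after another."""
    for width in TARGET_WIDTHS:
        lambda_client.invoke(
            FunctionName=RESIZE_FUNCTION,
            InvocationType="Event",   # asynchronous: the call returns immediately
            Payload=json.dumps({"bucket": bucket, "key": key, "width": width}).encode("utf-8"),
        )

# Example: resize a breaking-news hero image into all renditions at once.
fan_out_resize("example-photo-bucket", "news/2024/breaking-story/hero.jpg")
```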
About Expedia
Expedia, Inc. is a leading online travel company, providing leisure and business travel to customers worldwide. Expedia’s extensive brand portfolio includes Expedia.com, one of the world’s largest full-service online travel agencies, with sites localized for more than 20 countries; Hotels.com, the hotel specialist with sites in more than 60 countries; Hotwire.com; and other travel brands. The company delivers consumer value in leisure and business travel, drives incremental demand and direct bookings to travel suppliers, and provides advertisers the opportunity to reach a highly valuable audience of in-market travel consumers through Expedia Media Solutions. Expedia also powers bookings for some of the world’s leading airlines and hotels, top consumer brands, high-traffic websites, and thousands of active affiliates through Expedia Affiliate Network.
The Challenge
Expedia is committed to continuous innovation, technology, and platform improvements to create a great experience for its customers. The Expedia Worldwide Engineering (EWE) organization supports all websites under the Expedia brand. Expedia began using Amazon Web Services (AWS) in 2010 to launch Expedia Suggest Service (ESS), a typeahead suggestion service that helps customers enter travel, search, and location information correctly. According to the company’s metrics, an error page is the main reason for site abandonment. Expedia wanted global users to find what they were looking for quickly and without errors. At the time, Expedia operated all its services from data centers in Chandler, AZ. The engineering team realized that they had to run ESS in locations physically close to customers to enable a quick and responsive service with minimal network latency.
Why Amazon Web Services
Expedia considered on-premises virtualization solutions as well as other cloud providers, but ultimately chose AWS because it was the only solution with the global infrastructure in place to support Asia Pacific customers. “From an architectural perspective, infrastructure, automation, and proximity to the customer were key factors,” explains Murari Gopalan, Technology Director. “There was no way for us to solve the problem without AWS.”
Launching ESS on AWS
“Using AWS, we were able to build and deliver the ESS service within three months,” says Magesh Chandramouli, Principal Architect.
ESS uses algorithms based on customer location and aggregated shopping and booking data from past customers to display suggestions when a customer starts typing. For example, if a customer in Seattle entered “sea” when booking a flight, the service would display Seattle, SeaTac, and other relevant destinations. Expedia launched ESS instances initially in the Asia Pacific (Singapore) Region and then quickly replicated the service in the US West (Northern California) and EU (Ireland) Regions. Expedia engineers initially used Apache Lucene and other open source tools to build the service, but eventually developed powerful tools in-house to store indexes and queries. By deploying ESS on AWS, Expedia was able to improve service to customers in the Asia Pacific region as well as Europe. “Latency was our biggest issue,” says Chandramouli. “Using AWS, we decreased average network latency from 700 milliseconds to less than 50 milliseconds.”
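Expedia's in-house index and query tools are proprietary; purely as a toy illustration of the idea (ranking prefix matches by aggregated booking popularity), a few lines of Python look like this. The destination list and scores are invented.

```python
from bisect import bisect_left

# Tiny illustrative index: (destination, popularity score from aggregated bookings).
# A real typeahead index is far larger and also weights the requester's location.
DESTINATIONS = sorted([
    ("seatac", 812), ("seattle", 2741), ("seaside", 93),
    ("sedona", 187), ("singapore", 1930), ("sydney", 2210),
])
NAMES = [name for name, _ in DESTINATIONS]

def suggest(prefix, limit=3):
    """Return the most-booked destinations that start with the typed prefix."""
    prefix = prefix.lower()
    start = bisect_left(NAMES, prefix)          # first name >= prefix in sorted order
    matches = []
    for name, score in DESTINATIONS[start:]:
        if not name.startswith(prefix):
            break                               # past the prefix range
        matches.append((name, score))
    return [name for name, _ in sorted(matches, key=lambda m: -m[1])[:limit]]

print(suggest("sea"))   # ['seattle', 'seatac', 'seaside']
```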
Running Critical Applications on AWS
By 2011, Expedia was running several critical, high-volume applications on AWS, such as the Global Deals Engine (GDE). GDE delivers deals to Expedia's online partners and allows them to create custom websites and applications using Expedia APIs and product inventory tools. Expedia provisions Hadoop clusters using Amazon Elastic MapReduce (Amazon EMR) to analyze and process streams of data coming from Expedia’s global network of websites, primarily clickstream, user interaction, and supply data, which is stored on Amazon Simple Storage Service (Amazon S3). Expedia processes approximately 240 requests per second. “The advantage of AWS is that we can use Auto Scaling to match load demand instead of having to maintain capacity for peak load in traditional datacenters,” comments Gopalan. Expedia uses AWS CloudFormation with Chef to deploy its entire front-end and back-end stack into its Amazon Virtual Private Cloud (Amazon VPC) environment. Expedia uses a multi-region, multi-Availability Zone architecture with a proprietary DNS service to add resiliency to the applications. Expedia can add a new cluster to manage GDE and other high-volume applications without worrying about the infrastructure. “If we had to host the same applications on our on-premises data center, we wouldn’t have the same level of CPU efficiency,” says Chandramouli. “If an application processes 3,000 requests per second, we would have to configure our physical servers to run at about 30 percent capacity to avoid boxes running hot. On AWS, we can push CPU consumption close to 70 percent because we can always scale out. Fundamentally, running in AWS enables a 230 percent CPU consumption efficiency in data processing. We run our critical applications on AWS because we can scale and use the infrastructure efficiently.”
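Gopalan's point about matching load with Auto Scaling can be illustrated with a present-day boto3 call that attaches a target-tracking policy holding average CPU near the 70 percent figure Chandramouli mentions. The group name is a placeholder, and this is a sketch of the idea rather than Expedia's actual configuration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group name -- a placeholder for illustration.
ASG_NAME = "gde-api-fleet"

# Target tracking keeps average CPU near the target by adding or removing instances,
# which is what lets a fleet run hot without maintaining idle peak capacity.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="keep-cpu-near-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)
```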
Using IAM to Manage Security
To simplify the management of GDE, Expedia developed an identity federation broker that uses AWS Identity and Access Management (AWS IAM) and the AWS Security Token Service (AWS STS). The federation broker allows systems administrators and developers to use their existing Windows Active Directory (AD) accounts for single sign-on (SSO) to the AWS Management Console. In doing so, Expedia eliminates the need to create IAM users and maintain multiple environments where user identities are stored. Federation broker users sign into their Windows machines with their existing Active Directory credentials, browse to the federation broker, and transparently log into the AWS Management Console. This allows Expedia to enforce password and permissions management within its existing directory and to enforce group policies and other governance rules. Additionally, if an employee ever leaves the company or takes a different role, Expedia simply makes changes in Active Directory to revoke or change AWS permissions for the user, instead of making those changes inside AWS.
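Expedia's federation broker itself is not published; the sketch below shows the generic STS federation pattern such a broker can use: exchange the broker's long-lived credentials for temporary ones via GetFederationToken, then request a console sign-in token from the AWS federation endpoint. The issuer URL and the way the session policy is derived from AD group membership are assumptions.

```python
import json
import urllib.parse
import urllib.request

import boto3

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"
CONSOLE_URL = "https://console.aws.amazon.com/"

def console_signin_url(ad_username, session_policy):
    """Mint a federated AWS Management Console sign-in URL for an AD user,
    scoped by a policy the broker derives from the user's AD groups."""
    sts = boto3.client("sts")
    creds = sts.get_federation_token(
        Name=ad_username[:32],             # federated user name, max 32 characters
        Policy=json.dumps(session_policy), # permissions for this temporary session
        DurationSeconds=3600,
    )["Credentials"]

    session = json.dumps({
        "sessionId": creds["AccessKeyId"],
        "sessionKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
    })
    token_url = (
        f"{FEDERATION_ENDPOINT}?Action=getSigninToken"
        f"&Session={urllib.parse.quote_plus(session)}"
    )
    signin_token = json.loads(urllib.request.urlopen(token_url).read())["SigninToken"]

    return (
        f"{FEDERATION_ENDPOINT}?Action=login"
        f"&Issuer={urllib.parse.quote_plus('https://broker.example.internal')}"  # hypothetical issuer
        f"&Destination={urllib.parse.quote_plus(CONSOLE_URL)}"
        f"&SigninToken={signin_token}"
    )
```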
Standardizing Application Deployment
The success of the ESS and GDE services sparked interest from other Expedia development teams, who began to use AWS for regional initiatives. By 2012, Expedia was hosting applications in the US East (Northern Virginia), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and US West (Northern California) Regions. Expedia Worldwide Engineering culled best practices from these initiatives to create a standardized deployment setup across all Regions. As Jun-Dai Bates-Kobashigawa, Principal Software Engineer, explains, “We’re using Chef to automate the configuration of the Amazon Elastic Compute Cloud (Amazon EC2) servers. We can take any AWS image and use scripts stored in Chef to build a machine and spin up an instance customized for a team in just a few minutes.”
The team consolidated all AWS accounts under one AWS account and provisioned one Amazon VPC network in each Region. This allows each Region to have an isolated infrastructure with a separate firewall, application layer, and database layer. Expedia applies Amazon EC2 Security Group firewall settings to safeguard applications and services. Amazon VPC is completely integrated into Expedia’s lab and production environments. “The Amazon VPC experience for the developer is totally seamless,” says Bates-Kobashigawa. “Developers use the same Active Directory service for authentication and may not even know that some of the servers that they log onto are running on AWS. It feels like a physical infrastructure with its own subnets and multiple layers, and it’s also easy to connect to our on-premises infrastructure using VPN.”
Expedia uses a blue-green deployment approach to create parallel production environments on AWS, enabling continuous deployment and faster time-to-market. “One of our metrics for success is the reduction of time to deploy within our teams,” says Gopalan. “We use this method to launch applications pretty quickly compared to a traditional deployment. Moreover, reducing the cost of a rollback to zero means we can be fearless with deployments.”
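The case study does not say how Expedia switches traffic between the blue and green stacks; one common way to do it, shown here only as an illustration, is to adjust weighted DNS records in Amazon Route 53. All identifiers below are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone, record name, and load balancer hostnames.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
RECORD_NAME = "api.example-travel.com."

def shift_traffic(blue_weight, green_weight):
    """Adjust weighted DNS records so traffic moves between the two parallel
    production stacks; setting blue to 0 completes the cut-over, and restoring
    its weight is effectively a zero-cost rollback."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "CNAME",
                        "SetIdentifier": label,
                        "Weight": weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": f"{label}-lb.example-travel.com"}],
                    },
                }
                for label, weight in (("blue", blue_weight), ("green", green_weight))
            ]
        },
    )

# Send all traffic to the new (green) stack.
shift_traffic(blue_weight=0, green_weight=100)
```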
The Benefits
Expedia uses AWS to develop applications faster, scale to process large volumes of data, and troubleshoot issues quickly. By using AWS to build a standard deployment model, development teams can quickly create the infrastructure for new initiatives. Critical applications run in multiple Availability Zones in different Regions to ensure data is always available and to enable disaster recovery. Expedia Worldwide Engineering is working on building a monitoring infrastructure in all Regions and moving to a single infrastructure. Generally, teams have more control over development and operations on AWS. When Expedia experienced conversion issues for its Client Logging service, engineers were able to track and identify critical issues within two days. Expedia estimates that it would have taken six weeks to find the script errors if the service ran in a physical environment. Previously, Expedia had to provision servers for a full-load scenario in its data centers. “To deploy an application using our on-site facility, you have to think about the physical infrastructure,” Bates-Kobashigawa explains. “If there are 100 boxes running, you might have to take 20 boxes out to apply new code. Using AWS, we don’t have to take capacity out; we just add new capacity and send traffic to it.”
Chandramouli comments, “When I was a developer, you didn’t want to invest in architecture if you didn’t know how the application would turn out. I had to plan upfront and build a proof of concept to present to stakeholders. By using AWS, I’m not bound by throughput limitations or CPU capacity. When I think of AWS, freedom is the first word that comes to mind.”
The Challenge
- Supports a pipeline with billions of data points uploaded every day from different mobile applications running Localytics analytics software.
- The engineering team needed to access subsets of the data to create new services, but this led to additional capacity planning, utilization monitoring, and infrastructure management.
- Platform team wanted to enable self-service for engineering teams.
The Solution
- Uses AWS to send about 100 billion data points monthly through Elastic Load Balancing to Amazon Simple Queue Service, then to Amazon Elastic Compute Cloud, and finally into an Amazon Kinesis stream.
- For each new feature of the marketing software, a new microservice using AWS Lambda is created to access the Amazon Kinesis data stream. Each microservice can access the data stream in parallel with the others (see the sketch following this list).
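As sketched below (an illustration, not Localytics' code), a Lambda function subscribed to the shared Kinesis stream receives batches of base64-encoded records and processes them independently of every other microservice.

```python
import base64
import json

def handler(event, context):
    """One microservice's Lambda handler, wired to the shared Kinesis stream as
    an event source. Each microservice attaches to the same stream independently,
    so adding a feature never requires touching the main analytics application."""
    data_points = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])   # Kinesis data arrives base64-encoded
        data_points.append(json.loads(payload))

    # Feature-specific work would go here, e.g. updating a campaign-audience index.
    return {"processed": len(data_points)}
```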
The Benefits
- Decouples product engineering efforts from the platform analytics pipeline, enabling creation of new microservices to access data stream without the need to be bundled with the main analytics application.
- Eliminates the need to provision and manage infrastructure to run each microservice.
- Lambda automatically scales up and down with load, processing tens of billions of data points monthly.
- Speeds time to market for new customer services, since each feature is a new microservice that can run and scale independently of every other microservice.
The ROI4CIO Deployment Catalog is a database of software, hardware, and IT service implementations. Find implementations by vendor, supplier, user, business task, problem, or status, and filter by the presence of ROI data and references.