AWS Amazon EMR

Pitch
Easily Run and Scale Apache Spark, Hadoop, HBase, Presto, Hive, and other Big Data Frameworks

Product Features

Problems Solved
  • Aging IT infrastructure
  • Shortage of in-house software developers
  • High total cost of ownership (TCO) of IT infrastructure
  • Shortage of in-house IT resources
  • High costs of routine operations
  • IT infrastructure consumes a lot of power
  • IT infrastructure does not meet business needs
  • Low quality of customer support
Business Values
  • Reduce Costs
  • Enhance Staff Productivity
  • Ensure Security and Business Continuity
  • Improve Customer Service
Description

About Product

Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. EMR Notebooks, based on the popular Jupyter Notebook, provide a development and collaboration environment for ad hoc querying and exploratory analysis.

EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

Benefits

EASY TO USE

You can launch an EMR cluster in minutes. You don’t need to worry about node provisioning, cluster setup, Hadoop configuration, or cluster tuning. EMR takes care of these tasks so you can focus on analysis. Data scientists, developers and analysts can also use EMR Notebooks, a managed environment based on Jupyter Notebook, to build applications and collaborate with peers.
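
As a rough illustration of how little setup a launch involves, the sketch below starts a small cluster with the boto3 SDK. The cluster name, key pair, log bucket, and EMR release label are assumptions for the example, not values from this page.

import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="example-analytics-cluster",      # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",             # assumed EMR release label
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "Ec2KeyName": "my-key-pair",       # hypothetical EC2 key pair
    },
    LogUri="s3://my-emr-logs/",            # hypothetical log bucket
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])               # cluster ID of the new cluster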

LOW COST

EMR pricing is simple and predictable: You pay a per-instance rate for every second used, with a one-minute minimum charge. You can launch a 10-node EMR cluster with applications such as Hadoop, Spark, and Hive, for as little as $0.15 per hour. Because EMR has native support for Amazon EC2 Spot and Reserved Instances, you can also save 50-80% on the cost of the underlying instances.
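
The worked example below, using the $0.15-per-hour 10-node figure quoted above, shows how per-second billing translates into a bill for a short run; actual rates depend on instance types, region, and purchasing options.

cluster_rate_per_hour = 0.15          # 10-node cluster figure quoted above
runtime_seconds = 2 * 3600 + 30 * 60  # a 2.5-hour job, billed per second

cost = cluster_rate_per_hour * runtime_seconds / 3600
print(f"Charge for the run: ${cost:.2f}")  # -> $0.38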

ELASTIC

With EMR, you can provision one, hundreds, or thousands of compute instances to process data at any scale. You can easily increase or decrease the number of instances manually or with Auto Scaling, and you only pay for what you use. EMR also decouples compute instances and persistent storage, so they can be scaled independently.
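
Scaling can also be bounded centrally with EMR managed scaling, a related capability; the boto3 sketch below is a minimal example, with the cluster ID and capacity limits chosen purely for illustration.

import boto3

emr = boto3.client("emr")

emr.put_managed_scaling_policy(
    ClusterId="j-EXAMPLECLUSTERID",      # hypothetical cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,   # never scale below 2 instances
            "MaximumCapacityUnits": 20,  # cap growth at 20 instances
        }
    },
)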

RELIABLE

You can spend less time tuning and monitoring your cluster. EMR has tuned Hadoop for the cloud; it also monitors your cluster — retrying failed tasks and automatically replacing poorly performing instances. EMR provides the latest stable open source software releases, so you don’t have to manage updates and bug fixes, leading to fewer issues and less effort to maintain the environment.

SECURE

EMR automatically configures EC2 firewall settings that control network access to instances, and you can launch clusters in an Amazon Virtual Private Cloud (VPC), a logically isolated network you define. For objects stored in S3, you can use S3 server-side encryption or Amazon S3 client-side encryption with EMRFS, with AWS Key Management Service or customer-managed keys. You can also easily enable other encryption options and authentication with Kerberos.
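
As a sketch of how S3 encryption for EMRFS might be wired up, the example below creates a security configuration that enables SSE-KMS at rest; the configuration name and KMS key ARN are placeholders, and the exact JSON schema should be checked against the current EMR documentation.

import json
import boto3

emr = boto3.client("emr")

security_config = {
    "EncryptionConfiguration": {
        "EnableAtRestEncryption": True,
        "EnableInTransitEncryption": False,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "SSE-KMS",
                # placeholder KMS key ARN
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            }
        },
    }
}

emr.create_security_configuration(
    Name="emrfs-sse-kms-example",        # hypothetical configuration name
    SecurityConfiguration=json.dumps(security_config),
)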

FLEXIBLE

You have complete control over your cluster. You have root access to every instance, you can easily install additional applications, and you can customize every cluster with bootstrap actions. You can also launch EMR clusters with custom Amazon Linux AMIs.
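
The fragment below is an assumption-laden sketch of the run_job_flow parameters that attach a bootstrap action and a custom Amazon Linux AMI; the script path, arguments, and AMI ID are placeholders.

# These keys can be merged into the run_job_flow call sketched earlier.
extra_params = {
    "BootstrapActions": [
        {
            "Name": "install-extra-packages",   # hypothetical action name
            "ScriptBootstrapAction": {
                "Path": "s3://my-bucket/bootstrap/install.sh",  # placeholder script
                "Args": ["--with-profiling"],                   # placeholder args
            },
        }
    ],
    "CustomAmiId": "ami-0123456789abcdef0",     # placeholder custom AMI ID
}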

Use cases

CLICKSTREAM ANALYSIS

EMR can be used to analyze clickstream data in order to segment users, understand user preferences, and deliver more effective ads.
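
A minimal PySpark sketch of that kind of segmentation job might look like the following; the column names and S3 paths are assumptions for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-segments").getOrCreate()

# placeholder input path with one JSON record per click event
clicks = spark.read.json("s3://my-bucket/clickstream/")

segments = (
    clicks.groupBy("user_id")
    .agg(
        F.count("*").alias("page_views"),
        F.countDistinct("page_url").alias("unique_pages"),
    )
    .withColumn("segment",
                F.when(F.col("page_views") > 100, "heavy").otherwise("casual"))
)
segments.write.mode("overwrite").parquet("s3://my-bucket/segments/")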

REAL-TIME ANALYTICS

Consume and process real-time data from Amazon Kinesis, Apache Kafka, or other data streams with Spark Streaming on EMR. Perform streaming analytics in a fault-tolerant way and write results to S3 or HDFS.
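
As one possible shape for such a pipeline, the sketch below uses Spark Structured Streaming (the newer API that has largely replaced DStream-based Spark Streaming) to read from Kafka and write Parquet files to S3; the broker address, topic, and paths are placeholders, and the Kafka connector package must be available on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-to-s3").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder brokers
    .option("subscribe", "events")                       # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw_event")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://my-bucket/streaming-output/")          # placeholder
    .option("checkpointLocation", "s3://my-bucket/checkpoints/")  # placeholder
    .start()
)
query.awaitTermination()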

LOG ANALYSIS

EMR can be used to process logs generated by web and mobile applications. EMR helps customers turn petabytes of unstructured or semi-structured data into useful insights about their applications or users.
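
A small PySpark sketch of this kind of log processing is shown below; the log layout (a common web access-log format), the regular expressions, and the S3 path are assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("log-analysis").getOrCreate()

logs = spark.read.text("s3://my-bucket/access-logs/")   # placeholder path

parsed = logs.select(
    F.regexp_extract("value", r"^(\S+)", 1).alias("client_ip"),
    F.regexp_extract("value", r'"\S+ (\S+)', 1).alias("request_path"),
    F.regexp_extract("value", r'" (\d{3})', 1).alias("status"),
)
parsed.groupBy("status").count().orderBy(F.desc("count")).show()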

EXTRACT TRANSFORM LOAD (ETL)

EMR can be used to quickly and cost-effectively perform data transformation (ETL) workloads such as sorting, aggregating, and joining large datasets.
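
For instance, a sort-aggregate-join pipeline on EMR could look like the PySpark sketch below, with table paths and column names invented for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

orders = spark.read.parquet("s3://my-bucket/raw/orders/")        # placeholder
customers = spark.read.parquet("s3://my-bucket/raw/customers/")  # placeholder

daily_revenue = (
    orders.join(customers, "customer_id")              # join
    .groupBy("order_date", "region")                   # aggregate
    .agg(F.sum("amount").alias("revenue"))
    .orderBy("order_date")                             # sort
)
daily_revenue.write.mode("overwrite").parquet("s3://my-bucket/curated/daily_revenue/")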

PREDICTIVE ANALYTICS

Apache Spark on EMR includes MLlib for scalable machine learning algorithms, or you can use your own libraries. By storing datasets in memory, Spark can provide great performance for common machine learning workloads.
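
A minimal MLlib sketch, with the feature columns and dataset path assumed for the example, might train a classifier like this:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-example").getOrCreate()

data = spark.read.parquet("s3://my-bucket/training-data/")   # placeholder path

# assemble assumed numeric columns into the feature vector MLlib expects
assembler = VectorAssembler(
    inputCols=["feature_a", "feature_b", "feature_c"],
    outputCol="features",
)
train = assembler.transform(data).select("features", "label")

model = LogisticRegression(maxIter=20).fit(train)
print(model.coefficients)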

GENOMICS

EMR can be used to process vast amounts of genomic data and other large scientific data sets quickly and efficiently. Researchers can access genomic data hosted for free on AWS.

User Features

Stakeholder Roles
  • Chief Executive Officer
  • Chief Information Officer
Company Requirements
  • Web-based customer portal
Use Cases

Learn how Razorfish uses EMR for clickstream analysis

Customer Story

The Challenge

A common issue Razorfish has found with customer segmentation is the need to process gigantic clickstream data sets. These large data sets are often the result of holiday shopping traffic on a retail website, or sudden dramatic growth on the data network of a media or social networking site. Building in-house infrastructure to analyze these clickstream datasets requires investment in expensive “headroom” to handle peak demand. Without the expensive computing resources, Razorfish risks losing clients that require it to have sufficient resources at hand during critical moments. In addition, applications that can’t scale to handle increasingly large datasets can cause delays in identifying and applying algorithms that could drive additional revenue. As the sample data set grows (i.e., more users, more pages, more clicks), fewer applications are available that can handle the load and provide a timely response. Meanwhile, as the number of clients that use targeted advertising grows, access to on-demand compute and storage resources becomes a requirement. It was thus imperative for Razorfish to implement customer segmentation algorithms in a way that could be applied and executed independently of the scale of the incoming data and supporting infrastructure.

Learn how Hearst uses Spark Streaming

Customer Story

New York-based Hearst Corporation is one of the largest diversified communications companies in the world, with major interests in newspapers, magazines, and television. The company began migrating 10 of its 29 global data centers to AWS to reduce its IT infrastructure footprint. Hearst Corporation has now migrated its websites, data warehouse, and backup solution to AWS, enabling the organization to go to market more quickly in a fast-paced industry.

Materials

Workflow chart
