About this product
Product Identifiers
Publisher: O'Reilly Media, Incorporated
ISBN-10: 1491905778
ISBN-13: 9781491905777
eBay Product ID (ePID): 203645503
Product Key Features
Number of Pages: 150
Language: English
Publication Name: Getting Started with Impala: Interactive SQL for Apache Hadoop
Subject: Client-Server Computing, Programming Languages / SQL, Data Processing, Databases / General
Publication Year: 2014
Type: Textbook
Author: John Russell
Subject Area: Computers
Format: Trade Paperback
Dimensions
Item Height: 0.5 in
Item Weight: 10 oz
Item Length: 9.1 in
Item Width: 6.9 in
Additional Product Features
Intended Audience: Scholarly & Professional
LCCN: 2015-472013
Dewey Edition: 23
Dewey Decimal: 005.75
Synopsis
Learn how to write, tune, and port SQL queries and other statements for a Big Data environment, using Impala, the massively parallel processing SQL query engine for Apache Hadoop. The best practices in this practical guide help you design database schemas that not only interoperate with other Hadoop components and are convenient for administrators to manage and monitor, but also accommodate future expansion in data size and evolution of software capabilities. Written by John Russell, documentation lead for the Cloudera Impala project, this book gets you working with the most recent Impala releases quickly. Ideal for database developers and business analysts, the latest revision covers analytic functions, complex types, incremental statistics, subqueries, and submission to the Apache incubator. Getting Started with Impala includes advice from Cloudera's development team, as well as insights from its consulting engagements with customers.
Learn how Impala integrates with a wide range of Hadoop components
Attain high performance and scalability for huge data sets on production clusters
Explore common developer tasks, such as porting code to Impala and optimizing performance
Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
Learn how to transition from rigid schemas to a flexible model that evolves as needs change
Take a deep dive into joins and the roles of statistics