
Google/ASF Tackle Big Computing Trade-Offs with Apache Beam 2.0

Trade-offs are a part of life, in personal matters as well as in computing. You typically cannot have something built quickly, built inexpensively, and built well. Pick two, as your grandfather would tell you. But when you are Google and, in concert with the Apache Software Foundation, you have just delivered a grand unifying theory of programming and processing in the form of Apache Beam 2.0, those old rules apparently no longer apply.

Google Software Engineer Daniel Halperin delivered a compelling session on the benefits and capabilities of Apache Beam during this week's Apache Big Data conference in Miami, Florida. When you consider how many of the major breakthroughs in big data over the past 15 years originated with the Mountain View, California company (the MapReduce paper, the Google File System paper that inspired HDFS, and the Bigtable paper that inspired a hundred NoSQL databases), you realize it is probably worth taking notice.
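
For readers curious what Beam's "unifying" model looks like in practice, here is a minimal sketch (not from the article) using the Apache Beam Python SDK: the same word-count pipeline can be pointed at different runners, such as the local DirectRunner, Google Cloud Dataflow, Spark, or Flink, just by changing the pipeline options. The runner choice and file paths below are placeholder assumptions for illustration only.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder runner; swap "DirectRunner" for "DataflowRunner", "SparkRunner",
# or "FlinkRunner" without touching the transforms below.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.txt")    # hypothetical input file
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Count" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.Map(lambda kv: "{}: {}".format(kv[0], kv[1]))
        | "Write" >> beam.io.WriteToText("word_counts")  # hypothetical output prefix
    )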

Source: datanami.com
Author: Alex Woodie
