Using Analytic Pipelines to Drain Data Swamps

Hadoop excels as a platform for storing vast amounts of data, but it is not necessarily the best place for enterprises to run analytic queries.

Now enterprises may want to consider an emerging big data architecture that focuses on using analytic pipelines to prevent Hadoop from becoming a big data swamp. The idea of an analytic pipeline is nothing new, but the concept appears to be gaining steam as troubles with the data lake approach mount. One firm at the forefront of the big data analytics revolution is Pentaho, a provider of business intelligence and analytics software.
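To make the idea concrete, an analytic pipeline can be thought of as a chain of staged transformations that pull raw records out of the lake, cleanse them, and emit curated, query-ready results. The sketch below is illustrative only; the stage names (`ingest`, `cleanse`, `aggregate`) are hypothetical and do not reflect any specific product such as Pentaho's tooling.

```python
# A minimal sketch of an analytic pipeline: staged generator functions that
# drain raw "data lake" records into a curated analytic result.
# All stage names here are illustrative, not tied to any vendor product.

def ingest(raw_lines):
    """Parse raw comma-separated lines into dicts; skip malformed rows."""
    for line in raw_lines:
        parts = line.strip().split(",")
        if len(parts) == 3:
            yield {"user": parts[0], "event": parts[1], "value": parts[2]}

def cleanse(records):
    """Drop records with empty fields and coerce values to numbers."""
    for r in records:
        if all(r.values()):
            try:
                r["value"] = float(r["value"])
            except ValueError:
                continue
            yield r

def aggregate(records):
    """Roll up per-user totals -- the analytic end of the pipeline."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["value"]
    return totals

raw = ["alice,click,1.0", "bob,click,2.5", "alice,view,", "bob,click,0.5", "garbled"]
result = aggregate(cleanse(ingest(raw)))
print(result)  # {'alice': 1.0, 'bob': 3.0}
```

Because each stage validates and refines the data before passing it on, bad records are filtered out early instead of silently accumulating, which is the basic discipline that keeps a data lake from turning into a swamp.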

Author: Alex Woodie
