"More Hadoop projects will be swept under the rug as businesses devote major resources to their big data projects before doing their due diligence, which results in a costly, disillusioning project failure."
- Gary Nakamura, Concurrent
"Spend (on big data) wisely. Follow a CRAWL - WALK - RUN strategy."
- Peter Aiken, Data Blueprint
To mine big data, you must smelt it first. Hadoop distributions and specialty software will not access or handle all the data you need, mash it up, or prepare it thoroughly enough (cleansing, masking, reformatting). IRI software, on the other hand, handles both big and small data sets, and lets you map once, deploy anywhere. Choose between multi-threaded file-system processing in CoSort, or Hadoop MR2, Spark, Storm, or Tez processing in HDFS ... using the same Eclipse job design and metadata infrastructure.
For more than three dozen years, IRI has been the proven performer for preparing and manipulating multiple data sources across industries, geographies, and Unix/Windows platforms. Find out why you may only need:
- one affordable product, the IRI Voracity platform, which discovers, integrates, migrates, governs, and analyzes data, all in:
- one simple place, a free Eclipse GUI supporting a simple 4GL, and,
- one I/O pass, combining data transformation, protection, and reporting.
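The single-pass idea can be illustrated generically in Python (a conceptual sketch only, not IRI's SortCL 4GL): one read of the input drives transformation (sorting), protection (pseudonymizing a sensitive field), and reporting (an aggregate) together, rather than in three separate passes.

```python
import hashlib

def single_pass(rows):
    """Transform, protect, and report on rows in one I/O pass.

    rows: iterable of (customer_id, region, amount) tuples.
    Returns (masked rows sorted by region, totals by region).
    """
    masked = []
    totals = {}
    for customer_id, region, amount in rows:  # the single pass over the data
        # Protection: pseudonymize the ID with a one-way hash
        pseudo_id = hashlib.sha256(customer_id.encode()).hexdigest()[:12]
        masked.append((pseudo_id, region, amount))
        # Reporting: accumulate an aggregate while streaming
        totals[region] = totals.get(region, 0.0) + float(amount)
    # Transformation: sort the masked output by region
    masked.sort(key=lambda r: r[1])
    return masked, totals

rows = [("C001", "EU", "10.50"), ("C002", "US", "4.25"), ("C003", "EU", "7.00")]
masked, totals = single_pass(rows)
```

The point of combining the steps is that the data is read (and, for large files, spilled or sorted) only once, which is where most of the job's wall-clock time goes.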
Here's what you can do with IRI:
Big Data Protection - mask, encrypt, pseudonymize, de-identify, hash, or tokenize data as you transform and provision it.
Big Data Provisioning - bulk load with pre-sorted data, create replicas and federated views, prepare (blend, munge) data for BI/analytic tools, write reports, feed BIRT or index Splunk directly, or create big test data.
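"Big test data" here means synthetic rows that conform to a production schema without exposing production values. A minimal generic sketch in Python (illustrating the concept, not IRI RowGen's actual syntax):

```python
import random
import string

def generate_test_rows(n, seed=42):
    """Generate n synthetic customer rows: (id, name, balance).

    Values are random but schema-conformant, so they can safely
    stand in for production data in test environments.
    """
    rng = random.Random(seed)  # seeded, so test sets are repeatable
    rows = []
    for i in range(n):
        cust_id = f"C{i:06d}"                                  # sequential key
        name = "".join(rng.choices(string.ascii_uppercase, k=8))  # random text
        balance = round(rng.uniform(0, 10000), 2)              # bounded numeric
        rows.append((cust_id, name, balance))
    return rows

sample = generate_test_rows(1000)
```

Seeding the generator is a deliberate choice: two runs with the same seed produce identical test sets, which makes test results reproducible across environments.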
Design and manage your jobs in your choice of UIs from the same Eclipse IDE. Share, version-control, secure, and run the jobs from the GUI, or build them into batch scripts, applications, or distributed computing environments like Hadoop for even more speed and scalability.
Browse this section and its links for more details, or request a free trial.
Did you know?
Voracity uses IRI CoSort or Hadoop engines interchangeably, and CoSort pre-dates Hadoop in big data, with technology under development since 1978. IRI has used the term "big data" since 2004, with CoSort deployments in telco CDR data warehousing projects on distributed hardware.
CoSort, typically used for data transformation, staging, and reporting, can also do what its spin-offs do; i.e., data migration (IRI NextForm), data masking (IRI FieldShield), and test data generation (IRI RowGen).
IRI Voracity uses the same metadata and Eclipse GUI (IRI Workbench) as CoSort and its spin-offs, but also lets you design and schedule jobs with state-of-the-art ETL workflow and built-in automation tools.
Voracity users in the IRI Workbench GUI can view their HDFS files and contents, transfer data to and from HDFS, and auto-convert their transformation and masking job scripts (and batch flows). Execution, in the file system or in HDFS, can be driven on demand or scheduled in the same GUI where you design and manage your jobs and metadata.