Only IRI Voracity® can manipulate and manage a huge range and volume of data in one affordable Eclipse™ pane of glass. Use it to rapidly and reliably discover, integrate, migrate, govern, and analyze data in every source.
"The IRI Voracity platform has caused me to rethink the relative importance of data processing in information-centric systems, such as data warehouses and lakes. With the right features and sufficient power, a data processing platform can complement and, indeed, extend the function of the databases traditionally considered to be at the core of these systems."
-Barry Devlin, 9sight Consulting
Check out Voracity's capabilities, the challenges it addresses in digital transformations, and its components, below. Explore the other tabs in this section, and the solution areas throughout this website to understand just how much your teams can cooperatively accomplish with the state-of-the-art technology in this platform.
Voracity uniquely combines the discovery, integration, migration, governance, and analysis of data in a variety of sources ... all from one place, and often in one pass. Manipulate, migrate, mask, munge, and map structured, semi-structured, and unstructured data into multiple targets at once.
Voracity addresses the challenges of data volume, variety, velocity, veracity, and value with a comprehensive data management platform that eliminates multi-tool complexity and bends the cost curve away from megavendor ETL packages and Hadoop distributions.
Voracity is powered by IRI CoSort or Hadoop engines, and everything it does is front-ended in one graphical IDE, built on Eclipse™. Beyond a massive amount of included features, a plethora of free Eclipse plug-ins and proven partner technology expand what you can do with Voracity.
Voracity's core data management capabilities leverage the functionality of the IRI CoSort SortCL data definition and manipulation program.
As one of the original, and few remaining, viable fast-processing alternatives to Hadoop, SortCL packages, presents, and provisions big data. It combines data cleansing, extraction, transformation, loading, masking, and reporting -- and even synthetic test data generation -- in the same job script and multi-threaded I/O pass in your existing file system.
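For flavor only, the sketch below suggests how a single SortCL job script can filter, sort, and re-target data in one pass. The file layout, field names, and exact directive spellings here are illustrative assumptions; consult the CoSort SortCL manual for authoritative syntax.

```
/INFILE=orders.csv
/PROCESS=DELIMITED
/FIELD=(order_id, TYPE=NUMERIC, POSITION=1, SEPARATOR=",")
/FIELD=(customer, TYPE=ASCII, POSITION=2, SEPARATOR=",")
/FIELD=(amount, TYPE=NUMERIC, POSITION=3, SEPARATOR=",")
/INCLUDE WHERE amount GT 0
/SORT
 /KEY=customer
/OUTFILE=orders_sorted.csv
```

The point is that cleansing (the WHERE filter), transformation (the sort), and provisioning (the target definition) live in one script and one I/O pass, rather than in separate tools.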
If you still need the scalability and capability of Hadoop, however, you are covered. Voracity supports the execution of SortCL jobs in MapReduce2, Spark, Spark Stream, Storm, and Tez. Compare that to Hadoop distributions you are considering, or to the disjointed Apache projects you are trying to coordinate.
All of that work in the middle starts with data discovery. Only Voracity provides at least four data profiling tools. And it ends with analytics, where you have three choices: embedded BI, BIRT and Splunk integrations, and/or robust data preparation for your chosen data visualization platform.
As the above schematic illustrates, Voracity supports the design, deployment and management of all these activities from a single Eclipse pane of glass, IRI Workbench:
Only Voracity delivers multiple job design and deployment options in the same Eclipse GUI. And only Voracity uses the latest CoSort engines while also supporting multiple Hadoop engine alternatives from that same GUI.
So, by embedding CoSort's mission-critical data integration, migration, and governance capabilities, supporting Hadoop engines, and front-ending discovery, EMM, MDM, and workflow in a continually developed Eclipse IDE, Voracity is not only functionally comprehensive, it's uniquely ergonomic, scalable, and future-proofed for growing data sources and enterprise information needs.
Prepare big data subsets for analytics fast by accelerating and combining transforms in your file system - not in the BI or DB layer. Use Voracity to de-duplicate and filter, sort and join, aggregate and segment, reformat and hand off ... all in one pass. Send prepared data in memory to BIRT at reporting time, or into cubes your app wants. See IRI's approach here.
The CoSort engine in Voracity processed big data long before it was called big data, running and combining multi-gigabyte transforms in seconds, and besting 3rd-party sort, BI, DB, and ETL tools 2-20X.
And now there are Hadoop options in Voracity too, distributing and scaling huge workloads across commodity hardware via MapReduce2, Spark, Spark Stream, Storm, and Tez.
What tools are you using now to discover, extract, process, and analyze all the data you gather or buy? Can you reach and process it all in one pane of glass? Can you quality-control and manage its metadata and master data in that same place? Can you analyze the data there too, or at least rapidly integrate and prepare it for external applications? If you use multiple tools, can you manage the expertise they require? Or if you use a legacy ETL platform, can you bear its cost?
Voracity analyzes, integrates, migrates, governs, profiles, and connects to some 150 different data sources and targets ... structured, semi-structured, and unstructured.
That includes legacy files, data and endian types, as well as popular flat and document file formats, every RDB, and newer big data and cloud/SaaS sources.
The biggest data volumes are still processed in regular batch cycles, something Voracity's native CoSort and Hadoop MapReduce and Tez options will optimize. But what about the need to process (transform, mask, reformat) and analyze data in real-time for instant promotional campaigns (think mobile devices), or alerts (like traffic and weather notices) that can help drivers or event-goers?
Voracity includes CoSort to integrate data in memory and files, so you can process big data 6X faster than ETL tools, 10X faster than SQL, and 20X faster than BI/analytic tools. Its typical mode, including CDC, is batch.
Voracity can process real-time, near-real-time, and streaming data through Kafka or MQTT brokers, in memory via pipes or input procedures to CoSort, or in Hadoop Spark or Storm engines ... all from the same Eclipse GUI, IRI Workbench. Other options include using the built-in job launcher to spawn Voracity jobs in near-real-time intervals, or using specialized BAM or CEP tools for managing event-driven activity.
Garbage in = garbage out, and thus data in doubt. Data quality suffers from inconsistent, inaccurate, or incomplete values. Social media data can be deceptive, unstructured data imprecise, and data ambiguity plagues MDM. Survey data can be biased, noisy or abnormal. Meanwhile PII and secrets contained in all that data mean you have to mask it prior to shared use. Do you have a central point of control for cleaning data and making it safe?
Voracity's data discovery, fuzzy matching, value validation, scrubbing, enrichment, and unification features all improve data quality.
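To make the fuzzy-matching idea concrete, here is a generic illustration using only Python's standard library - not Voracity's own implementation - showing how near-duplicate values can be collapsed with a string-similarity threshold:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratcliff/Obershelp similarity ratio in [0, 1], case-insensitive
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def dedupe(records, threshold=0.85):
    # Keep each record only if it is not a near-duplicate of one already kept
    kept = []
    for rec in records:
        if all(similarity(rec, k) < threshold for k in kept):
            kept.append(rec)
    return kept

names = ["Jon Smith", "John Smith", "Jane Doe", "JON SMITH"]
print(dedupe(names))  # near-duplicates of "Jon Smith" collapse; "Jane Doe" survives
```

A real data quality pipeline would also weigh phonetic and token-based measures, but the threshold-and-compare pattern is the same.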
Voracity's comprehensive data masking functions and synthetic test data generation capabilities remove the risk of data breaches and poor prototypes.
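Conceptually, a masking rule pairs a source column with a protection function in the same field-level metadata used for transformation. The fragment below is a hypothetical sketch only; the field layout and the function name shown are assumptions, and FieldShield's actual function library and syntax are documented in its own manual.

```
/FIELD=(ssn, TYPE=ASCII, POSITION=4, SEPARATOR=",")
/FIELD=(masked_ssn=enc_fp_aes256_alphanum(ssn), POSITION=4, SEPARATOR=",")
```

Because masking is just another field expression, it can run in the same job and I/O pass as sorting, joining, and reporting.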
Consider your information and decision needs from data. For example, are you tracking consumer behavior, weather patterns, device or web log activity so that you can change promotions, make predictions, or diagnose problems? Do you see the value in an IDE easy enough for self-service data preparation and presentation, but powerful enough for IT and business user collaboration in data lifecycle management?
Voracity is the one tool that provides access to, and discovery across, the disparate data sources behind these analyses.
Only Voracity allows you to blend, cleanse, mask, and munge tons of data fast, and feed the results to algorithmic and visualization applications -- within the same environment, or in another one, in the right format.
The default Voracity stack uses IRI Workbench for client-side design of data-driven jobs defined in portable scripts represented in multiple graphical UIs.
Many of the same jobs also run interchangeably in Hadoop MR2, Spark, Spark Stream, Storm, or Tez.
Voracity metadata and related job script parameters are fully supported in the Workbench data model and optionally in Erwin (AnalytiX DS) Mapping Manager, for graphical creation, modification, and management.
Within the base Voracity package are:
- DB, flat-file, and dark data profiling, ERD, and metadata definition wizards
- key data processing features of constituent IRI Data Manager products
- key data security features of constituent IRI Data Protector products
- multiple, re-entrant job design options and execution paradigms
- runtime and metadata SDKs for application development
- robust GUI help content and CLI reference manuals
More specifically, Voracity includes free use of a rich, familiar front-end job design and management environment called IRI Workbench, built on Eclipse™. Together with Voracity's back-end production engine you can run anywhere, IRI Workbench supports the capabilities of:
- IRI CoSort for big data manipulation and movement, including EDW integration (ETL) and data preparation for DBs and analytic tools, data quality, embedded BI, metadata and master data management, legacy sort migration, and data governance
- IRI NextForm for data and DB migration, data replication, remapping, and federation
- IRI FieldShield for masking PII in files and databases, IRI CellShield EE for Excel®, and IRI DarkShield for unstructured text
- IRI RowGen for generating synthetic but realistic file, database, and report test data
Additional capabilities of IRI Workbench include:
- default and plug-in shell UIs for command line execution and interaction
- multiple data profiling and metadata discovery and definition wizards
- metadata management and master data management (MDM)
- ODA-integration for BIRT (visual analytics)
- Sirius workflow, transform mapping, and E-R diagrams
- Knime analytics integration with Voracity (pending)
Beyond the base edition, premium options include:
- ADS ETL Conversion | automated job/metadata migration from other ETL tools
- Erwin Mapping Manager | code-free source-target ETL mapping/flow generation
- CONNX Drivers | move/manipulate mainframe and other proprietary data
- DataDirect Drivers | move/manipulate big data and cloud/SaaS data
- DW Digest | cloud dashboard for interactive BI
- IRI FACT | parallel unload of Oracle and 6 other VLDB tables to files
- IRI Hadoop Runtimes | MapReduce2, Spark, Spark Stream, Storm, and Tez options
- IRI Chakra Max | DB activity monitoring, auditing & protection (DAM/DAP)
Whether included with the base Voracity package, or installed as partner technology, everything runs in IRI Workbench and leverages the same open data and manipulation metadata infrastructure for job management and deployment ... inside or outside the GUI.