Master Data Management with Git
Master Data Management (MDM) is a strategic enterprise information management (EIM) life cycle initiative designed to foster the consistent and accurate maintenance of master (or reference) data. Read More
As database tables and indexes grow, data becomes more fragmented and query response slows. Regular table reorganization is therefore needed to restore database operating efficiency. See this article for why reorgs matter, and the material below detailing the reorg wizard's use. Read More
The Opening Question
ODBC sometimes gets a bad rap for speed, but should it? From what's posted online, you'd think ODBC is intrinsically slow. Microsoft disagrees, at least in the case of SQL Server. Read More
This article is the second in a 3-part series on CLF and ELF web log data. We previously explained the CLF and ELF web log formats; here we introduce IRI solutions for manipulating and using web log data. Read More
This article is the fourth in a 4-part series on managing metadata assets in IRI Workbench using Git. It focuses on the security of your metadata. Other articles in the series cover the use of Git as a metadata asset hub, for version control, and for tracking metadata lineage. Read More
This article is the third in a 4-part series on managing metadata assets in IRI Workbench using Git. It focuses on Git's value in tracking metadata lineage. Read More
This article is the second in a 4-part series on managing metadata assets in IRI Workbench using Git. It focuses on Git's role in metadata version control. Other articles in the series cover the use of Git as a metadata asset hub, for tracking metadata lineage, and for metadata security. Read More
This article is the first in a 4-part series on managing job-specific metadata assets in IRI Workbench. It focuses on the value of a metadata hub in general, and a Git implementation in particular. Read More
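The hub idea above can be sketched with ordinary Git commands: a minimal, hypothetical example (not the series' own walkthrough) of placing a job metadata file, such as an IRI Workbench data definition (.ddf) file, under version control so the repository acts as a shared metadata hub. The directory name, file name `customers.ddf`, and its placeholder contents are illustrative assumptions.

```shell
# Create a local repository to serve as the metadata hub
mkdir -p metadata-hub && cd metadata-hub
git init -q
git config user.email "dev@example.com"   # local identity for this demo repo
git config user.name  "Metadata Admin"

# Add an illustrative metadata asset; real .ddf contents would go here
printf '/* illustrative placeholder for a data definition file */\n' > customers.ddf

# Commit it so every later change is tracked and attributable
git add customers.ddf
git commit -q -m "Add customer data definition metadata"
git log --oneline   # one commit recorded for the new metadata asset
```

Once assets live in such a repository, the later topics in the series map onto familiar Git features: `git log` and `git diff` for version history and lineage, and repository access controls for security.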
There are a variety of testing requirements for any data warehouse or database, and especially for dual platforms like Teradata, where ETL and BI prototyping, application stress testing, and performance benchmarking are essential. Read More
Introduction
The ability to directly move very large database (VLDB) data to and from, and manipulate it within, IRI software is essential for those requiring:
- data integration (ETL)
- database acceleration (unloads, loads, reorgs, queries)
- database migration
- data masking (encryption, redaction, de-identification, etc.)

Read More

This is the final article in a three-part blog series introducing IRI's new data structuring technology. The first blog introduces "dark data" and the unstructured sources the wizard can analyze. Read More