Focus on the Steak, not the Table Stakes, of Data Integration

If you are a vendor who needs access to data feeds, or an IT leader responsible for providing them, you have undoubtedly imagined how much better things would be if the basics of integration were improved. These basics are the "table stakes" of any project or product.

After talking to many vendors and IT leaders about new models of integration, two clear "table stakes" emerge: flat file integration and data compliance. The sheer number of flat file (CSV, etc.) data feeds from sources like the SIS, ERP, or LMS is a clear pain point, and uncertainty about data compliance slows things down and reduces the ability to innovate (the "steak"). Flat file extract processes are often deeply embedded in internal processes and staff responsibilities, yet little is known about the data sensitivity of the extracts themselves.

Data sensitivity impacts FERPA compliance, and FERPA compliance differs from data feed to data feed. So how do you track the data sensitivity of multiple data feeds? What tools would you need to share more data with vendors through APIs while keeping a clear sense of data sensitivity?

In the latest Lingk release, we've introduced the easiest-to-use interface for importing flat (or delimited) files from SFTP into the Lingk Data Engine. Once the data is there, each data set receives a data sensitivity score, and you can use the API platform to access it.

This interface is designed to streamline the iterative process of getting extracted files to import with zero failed rows. Lingk improves the data integration workflow in the following ways:

  1. The moment a file is dropped onto our SFTP or WebDav endpoints, it is immediately imported. No need to set up polling or scheduling.

  2. Once the file is imported, the Bulk Upload user interface shows you exactly what happened and lets you compare the import to similar recent ones. You see all imports in context with your current work, without jumping between clunky user interfaces designed for a single import.

  3. Many systems (including Lingk) import relational data, and sequencing the files correctly can be hard. Instead of importing one file at a time in a programmed sequence, you can now put them all in a zip file, and Lingk will sequence the files correctly.

  4. The validation of failed rows is the same validation that occurs when using the APIs. So, while you are fixing errors in the flat file, you are also learning the basics of how the API will respond to the same data.

  5. Each Connected App in the Lingk platform receives its own SFTP endpoints, removing the need for awkward file naming conventions to prevent accidental file overwrites from multiple systems.
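The file sequencing in point 3 can be pictured as a topological sort over the files' relational dependencies. Lingk's actual sequencing logic is internal; the sketch below uses hypothetical file names and a hand-declared dependency map to show the general idea, with Python's standard library `graphlib`.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical files in the zip, mapped to the files they depend on:
# enrollments reference both students and courses, so those must import first.
depends_on = {
    "students.csv": set(),
    "courses.csv": set(),
    "enrollments.csv": {"students.csv", "courses.csv"},
}

# static_order() yields an order where every file comes after its dependencies.
import_order = list(TopologicalSorter(depends_on).static_order())
print(import_order)
```

With a map like this, adding a new file to the zip only requires declaring its dependencies; the import order falls out automatically.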
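Point 4, the shared validation between bulk upload and the APIs, can be sketched as a single rule set applied in both paths. The required fields and checks below are hypothetical, not Lingk's actual schema; the point is that a row failing here would be rejected identically by a bulk import or an API call.

```python
# Hypothetical validation rules; a real schema would come from the platform.
REQUIRED = ("student_id", "email")

def validate_row(row: dict) -> list[str]:
    """Return a list of error messages; an empty list means the row is valid."""
    errors = []
    for field in REQUIRED:
        if not row.get(field):
            errors.append(f"missing required field: {field}")
    if row.get("email") and "@" not in row["email"]:
        errors.append("email is not a valid address")
    return errors

rows = [
    {"student_id": "S1", "email": "a@example.edu"},
    {"student_id": "", "email": "not-an-email"},
]
# Collect (row index, errors) for every row that fails validation.
failed = [(i, errs) for i, r in enumerate(rows) if (errs := validate_row(r))]
print(failed)
```

Because the same function runs in both paths, every failed row you fix in a flat file teaches you how the API will respond to the same data.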

The Bulk Upload interface simplifies the iterative improvements needed to import clean data.

Once the data is imported into the Lingk Data Engine, it is scored using your data field rules and weights. Since each data set is scored using the same rules, you have a clear view of the data in a Connected App without having to even look at a row.
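The scoring model can be pictured as a simple weighted sum. The field names and weights below are illustrative assumptions, not Lingk's actual rules: each sensitive field found in a data set contributes its configured weight, and because every data set is scored against the same rules, the totals are directly comparable.

```python
# Hypothetical field rules and weights; in Lingk these are your own configuration.
FIELD_WEIGHTS = {
    "ssn": 10,          # direct identifiers weigh heavily
    "date_of_birth": 5,
    "email": 2,
    "course_code": 0,   # non-sensitive operational data
}

def sensitivity_score(fields: list[str]) -> int:
    """Sum the configured weight of every recognized field in a data set."""
    return sum(FIELD_WEIGHTS.get(f, 0) for f in fields)

roster = ["student_id", "email", "date_of_birth"]
schedule = ["course_code", "room", "term"]

print(sensitivity_score(roster))    # higher score means a more sensitive feed
print(sensitivity_score(schedule))
```

A scheme like this is why you get a clear compliance view without looking at a single row: the score depends only on which fields a feed carries.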

Data Sensitivity scoring provides an easy view into compliance.

With these improvements, Lingk makes it easier to focus on the "steak" of innovation, by providing improved integration "table stakes". Now creating API data feeds from flat files in a controlled way is not only easy but actually kind of fun. Sign up for a demo to learn more!