Yieldigo Integration Manual
Version 03/2026

Your input is valuable to us. Please share any feedback or suggestions regarding the integration manual at projectdelivery@yieldigo.com.

Implementation Process

The implementation of Yieldigo is divided into several consecutive phases. Each phase has a defined purpose, deliverables, and responsibilities for both the client and Yieldigo. This structured approach ensures that data quality, integration processes, alignment of pricing strategy, and user adoption are addressed step by step. The overall project duration depends on the quality and complexity of the client's data, as well as the required data transformations on both sides. The timeline is usually estimated during the data mapping session in the Discovery phase.

Project Phases

Discovery Phase
- The client provides all core datasets and, if available, optional datasets (these enhance the user experience and enable full tool functionality).
- Up to three iterations of data quality feedback: identifying issues, adjusting structure/format, and defining required data transformations.
- Data cleaning by the client: removing duplicates, fixing inaccuracies, and resolving missing values. Accurate data is required for smooth repricing.

Integration Phase
- Data implementation: Yieldigo performs simple data mapping (merging columns, calculating values, and renaming headers). Complex transformations (business logic in scripts and handling of inconsistent structures) are out of scope and may incur additional costs.
- Daily data flow setup: configuring the import/export of files in an agreed format, structure, and schedule to ensure correct data availability.
- Data validation: implementing regular checks to detect and resolve data gaps or errors.
- Data finalization: last adjustments and preparation of the environment configuration documentation.
- Pricing strategy setup: initial tool training, pricing strategy workshops, and training materials provided; solution settings and application configuration aligned with business goals and pricing strategy;
- Validation of data and environment architecture to confirm consistency with business requirements.

Handover Phase
- End-to-end data cycle test to verify seamless flow across client and Yieldigo systems.
- Final fine-tuning of the pricing solution to meet goals and rules.
- Q&A session to prepare key users for daily operations.

Go Live
- Start of regular repricing in the production environment.

Hypercare & Support
- A period of enhanced support from the Customer Success Manager (CSM) to ensure a smooth transition from onboarding to regular operations.
- Various support services (https://supportservices.yieldigo.com/) are offered for ongoing support and long-term success.

Input and Output Data Integration

The objective of data integration is to create a streamlined and automated workflow consisting of three key stages.

Infrastructure and Data Formats

Data Storage and Exchange

Data storage acts as the central channel for exchanging information between the client and Yieldigo. The client provides input data, which Yieldigo's algorithms process to deliver output data in the form of computed prices. To ensure secure and reliable data exchange, Yieldigo recommends one of the following approaches, all based on SFTP (SSH File Transfer Protocol).

Yieldigo-hosted SFTP server: Yieldigo creates and maintains an SFTP server, and the client receives the necessary credentials. The client provides a list of IP addresses for whitelisting, ensuring that only authorized connections are allowed.

Client-hosted SFTP server: The client sets up and manages their own SFTP server with write permissions and provides Yieldigo with access credentials or a certificate. The server must be accessible via a public IP address (VPN-based access is not supported in the standard integration). This option gives the client greater control over their data while still maintaining a secure connection with Yieldigo.

Alternative storage options: If preferred, data exchange can also be set up using Azure Files, Amazon S3, or Keboola.

Data Format

For data exchange, the system uses the widely adopted CSV (comma-separated values) format (https://en.wikipedia.org/wiki/Comma-separated_values). CSV is supported by most software platforms and is both human-readable and machine-readable, making it a simple, transparent, and reliable choice for exchanging data between the client and Yieldigo.

- Encoding: UTF-8 (https://en.wikipedia.org/wiki/UTF-8).
- Column separator: items are separated by a semicolon (;).
- Line endings: rows are terminated by the LF code (standard Unix format).
- Quoting: text values are surrounded by double quotation marks (").
- Escaping: if the text of an item contains quotation marks, these quotation marks are doubled ("").
- Null values: null is represented by an empty string (without quotation marks).
- Date: a date is provided as YYYY-MM-DD (according to ISO 8601, https://en.wikipedia.org/wiki/ISO_8601).
- Decimals: dot decimal notation is used (3.34 is correct, while 3,34 is wrong).
- Header: the first row contains the column names. No capital letters and no diacritics are used; spaces within a column name are represented by an underscore (_). The columns' order and names are table-specific (e.g., store_id, product_id).
- Identifiers: identifiers for all entities may contain only alphanumeric characters (A-Z, a-z, 0-9), without spaces or special symbols.

Import Dataset Requirements

To ensure a seamless implementation and an optimal environment setup, datasets should be provided in the structure and format described in the mandatory and optional dataset sections.

- Mandatory dataset: files that are required to build and initialize the Yieldigo instance.
- Optional dataset: files that are not strictly required but strongly recommended; they enrich the user experience and enable advanced features such as attribute-based filtering and product family creation.
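As a concrete illustration of the CSV conventions described above, the following sketch writes a conforming file using Python's standard csv module. The column names (store_id, product_id, price, valid_from) are illustrative assumptions only; actual column names and order are table-specific and agreed during data mapping.

```python
import csv
import io

def write_rows(rows, header):
    """Serialize rows to CSV text matching the agreed exchange format:
    semicolon separator, LF line endings, text values quoted with
    double quotation marks (embedded quotes are doubled automatically),
    and dot decimal notation for numbers."""
    buf = io.StringIO()
    writer = csv.writer(
        buf,
        delimiter=";",                 # items separated by a semicolon
        quotechar='"',
        quoting=csv.QUOTE_NONNUMERIC,  # quote text values; numbers stay bare
        lineterminator="\n",           # LF, standard Unix format
    )
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative columns only -- not a prescribed Yieldigo schema.
header = ["store_id", "product_id", "price", "valid_from"]
rows = [
    ["S001", "P123", 3.34, "2026-03-01"],  # ISO 8601 date, dot decimal
    ["S001", "P124", 12.5, "2026-03-01"],
]

print(write_rows(rows, header))
```

With `csv.QUOTE_NONNUMERIC`, every text value is wrapped in double quotation marks and any embedded quotation marks are doubled automatically, while numeric values are emitted unquoted in dot-decimal notation, matching the format rules above.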
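The data-validation step of the Integration phase — regular checks that detect data gaps and errors before repricing — can be sketched along the following lines. This is a minimal illustration under assumed column names (store_id, product_id, price, valid_from); it is not Yieldigo's actual validation logic.

```python
import re
from datetime import date

# Identifiers may contain only alphanumeric characters, no spaces or symbols.
ID_RE = re.compile(r"^[A-Za-z0-9]+$")

def validate_row(row, seen_keys):
    """Return a list of problems found in one parsed CSV row.

    `row` is a dict with illustrative keys (store_id, product_id,
    price, valid_from); `seen_keys` accumulates (store_id, product_id)
    pairs so duplicate rows can be flagged."""
    problems = []

    for field in ("store_id", "product_id"):
        if not ID_RE.match(row.get(field, "")):
            problems.append(f"invalid identifier in {field}: {row.get(field)!r}")

    # Dot decimal notation is required (3.34 is correct, 3,34 is not).
    try:
        float(row["price"])
    except (KeyError, ValueError):
        problems.append(f"price is not a dot-decimal number: {row.get('price')!r}")

    # Dates must use the ISO 8601 form YYYY-MM-DD.
    try:
        date.fromisoformat(row["valid_from"])
    except (KeyError, ValueError):
        problems.append(f"date is not YYYY-MM-DD: {row.get('valid_from')!r}")

    key = (row.get("store_id"), row.get("product_id"))
    if key in seen_keys:
        problems.append(f"duplicate row for {key}")
    seen_keys.add(key)

    return problems

seen = set()
ok_row = {"store_id": "S001", "product_id": "P123",
          "price": "3.34", "valid_from": "2026-03-01"}
bad_row = {"store_id": "S 01", "product_id": "P123",
           "price": "3,34", "valid_from": "01.03.2026"}

print(validate_row(ok_row, seen))   # no problems
print(validate_row(bad_row, seen))  # bad identifier, decimal, and date
```

In practice such checks would run on every daily import, so that gaps or format drift are caught and resolved before prices are computed.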
