By: Pedro Castellares, with studies in Machine Learning at the Massachusetts Institute of Technology and in Digital Transformation at the University of California, Berkeley.

Abstract

IBM defines Data Science as the process of extracting hidden knowledge (insights) from massive amounts of structured and unstructured data, using methods such as statistics, machine learning, data mining and predictive analytics. It is a multidisciplinary field that is changing the way organizations solve problems and gain competitive advantages.

CRISP-DM (Figure 1), which stands for Cross Industry Standard Process for Data Mining, is a proven method used to guide data mining work.

• As a methodology, it includes descriptions of the typical phases of a project, the tasks required in each phase, and an explanation of the relationships between those tasks.

• As a process model, CRISP-DM provides an overview of the data mining life cycle.

The Data Science life cycle, as explained by IBM, comprises roughly 5 to 16 continuous, overlapping processes. The exact number varies depending on who you ask; the most popular are the following:

• Capture: The collection of raw data from any source, entered by any method. The data can be structured or unstructured, the sources need only be relevant, and data entry can take almost any form - from manual entry, to web scraping, to collecting data from systems and equipment in real time.

• Prepare and maintain: Putting raw data into a consistent format for processing by analytics, machine learning or deep learning. This can include cleaning, removing duplicates and reformatting data, as well as using ETL (extract, transform, load) or other integration technologies to combine the data into a data warehouse, data lake or another unified repository for analysis.
• Pre-process or process: Data scientists examine biases, patterns, ranges and distributions of values within the data to determine how suitable they are for use in predictive analytics, machine learning, deep learning algorithms or other analytical methods.

• Analyze: This is where the discoveries happen. Data scientists apply statistical and predictive analysis, regression, machine learning and deep learning algorithms, and more to extract insights from the previously prepared data.

• Communicate: Finally, the insights discovered are presented as reports, charts and other forms of data visualization that translate those insights, and their impact on the business, into a representation that stakeholders can understand more easily. Data Science programming languages such as R and Python include components for generating visualizations; alternatively, data scientists can use dedicated visualization tools.

In this technical work, Machine Learning algorithms were used to determine the limits of interferents (%Mg, %Tox, %Carbonates and %Clays) contained in an ore and their impact on copper recovery, following the scheme (Figure 2) provided by MIT (Massachusetts Institute of Technology), which is based on the philosophy of the predictive model: "the future of the past is the future of the future". In other words, it is assumed that knowledge of past data allows future data to be predicted.
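The Capture → Prepare → Pre-process → Analyze → Communicate cycle described above can be sketched as a minimal Python pipeline. Everything here is illustrative: the data are synthesized in place (a hypothetical ore-grade table), and the pandas/scikit-learn stack stands in for whatever tools a given project actually uses.

```python
# Minimal sketch of the five life-cycle steps on hypothetical, synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Capture: in practice data arrive from files, APIs, scraping or sensors;
# here we synthesize a small raw table (hypothetical values).
raw = pd.DataFrame({"feed_grade": rng.uniform(0.5, 2.0, 200)})
raw["recovery"] = 80 + 5 * raw["feed_grade"] + rng.normal(0, 1, 200)
raw = pd.concat([raw, raw.iloc[:10]])  # inject duplicates to clean up later

# Prepare and maintain: remove duplicates, restore a consistent index.
clean = raw.drop_duplicates().reset_index(drop=True)

# Pre-process: inspect ranges and distributions before modeling.
stats = clean.describe()

# Analyze: fit a simple regression to extract a relationship.
model = LinearRegression().fit(clean[["feed_grade"]], clean["recovery"])

# Communicate: report the finding in a stakeholder-readable form
# (a real project would plot this with matplotlib, ggplot2, etc.).
print(f"rows after cleaning: {len(clean)}")
print(f"estimated slope: {model.coef_[0]:.2f} recovery points per grade unit")
```

Each comment block maps to one life-cycle stage, which is the point of the sketch: the stages overlap in practice, but they remain identifiable steps in the code.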
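The "future of the past" philosophy can be illustrated with a short sketch: a model is trained on "past" assay data and evaluated on held-out "future" data. The interferent assays, their coefficients and the train/test split below are entirely hypothetical stand-ins, not the actual scheme of Figure 2 or real plant values.

```python
# Illustrative sketch of the predictive-model idea: train on past assays,
# predict recovery for unseen (future) ore. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical interferent assays: columns are %Mg, %Tox, %Carbonates, %Clays.
X = rng.uniform([0, 0, 0, 0], [5, 2, 10, 15], size=(n, 4))

# Hypothetical ground truth: each interferent depresses copper recovery.
recovery = 92 - 1.5 * X[:, 0] - 3.0 * X[:, 1] - 0.8 * X[:, 2] - 0.5 * X[:, 3]
recovery += rng.normal(0, 1.0, n)  # assay/measurement noise

# "Past" data train the model; a held-out slice plays the role of the future.
X_past, X_future, y_past, y_future = train_test_split(
    X, recovery, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_past, y_past)
print("R^2 on unseen (future) data:", round(model.score(X_future, y_future), 3))
```

The fitted coefficients would then indicate how strongly each interferent limits recovery, which is the kind of question the work described above sets out to answer with real assay data.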