CRISP-DM and Skylab USA

Alfred J. Nigl, Ph.D. and Dean Grey

Skylab USA, one of the world's most robust white-labeled social media engagement platforms, was created to leverage the principles of the science of engagement and collects a wide variety of user data and information. Skylab's data science department has adopted the CRISP-DM method for organizing and compiling data for analyses and reporting. Based on this ongoing data analysis, the following types of reports can be created and distributed to industry professionals and media outlets.

| # | Social Media Platform | Total Monthly Active Users | Brand Engagement % | Valuation |
|---|-----------------------|----------------------------|--------------------|-----------|
| 1 | Facebook              | 2.13B *1                   | 0.07%              | $464B     |
| 2 | Instagram             | 800M *2                    | 4.21%              | $35B      |
| 3 | Slack                 | 6M *3                      | 0%                 | $5B       |
| 4 | WhatsApp              | 1.5B *4                    | 0%                 | $20B      |
| 5 | Snapchat              | 187M *5                    | 40-60%             | $19.5B    |
| 6 | Skylab USA            | 20,308                     | 73%                |           |

  1. Compared to the social media platforms with verifiable brand engagement stats, Skylab USA is outperforming Facebook by roughly 1,000 times (73% vs. 0.07%) and outperforming Instagram by roughly 17 times (73% vs. 4.21%).

  2. The reason for these very high brand engagement levels is Skylab's adoption of a Value Reinforcement System (VRS), based on a modern adaptation of Social Cognitive Learning Theory (see the paper by Nigl and Grey published on ResearchGate, February 2018).

Skylab is also outperforming most of the apps that have been released in both the Apple and Google (Android) app stores. The table below shows how Skylab ranks in total downloads as of March 2018.

With a total of 27,989 downloads, Skylab USA is outperforming over 85% of all apps released to date.

In summary, Skylab's unique VRS and its highly intelligent gamification platform harness the power of social- and self-reinforcement systems to produce very high levels of user engagement and app downloads, placing Skylab in the upper decile for engagement, downloads, and retention.

Data mining is a critical function that all businesses need to engage in to find and leverage the value of their legacy and current customer data. The function used to be known as knowledge discovery, and that term is still a good description of what takes place in data mining.

In 1996, a group of leading data scientists came together for the express purpose of creating a uniform process for conducting data mining, independent of the software used and the experience level of the user. It was designed to be freely available, and a recent survey found that over 40% of data scientists around the world still rely on it today. The process was named CRISP-DM, the Cross-Industry Standard Process for Data Mining.

The motivation for a standardized process included concern among many data scientists that specialists were dominating the field, and a belief in the democratization of data mining. If data mining was to spread among business professionals with no formal data science training, it was important to ensure that the process be reliable and repeatable by people with little data mining background.

Secondarily, CRISP-DM also serves as a substitute for the experimental method, which has its beginnings in the traditional physical and social sciences but is typically not formally applied to data science.

The graphic below represents the six steps that characterize the CRISP-DM process. Key features of this model:

  1. A process model that focuses on business needs

  2. Can be applied by non-data scientists

  3. Provides a complete blueprint

  4. A data mining methodology

  5. A life cycle with six phases

The important thing to notice about the graphic above is that the process was designed to focus on business, not on technology or data science: Business Understanding is the first step. In fact, many data scientists and other thought leaders in the field have emphasized that any predictive model developed without a strong understanding of the business is useless and not worth deploying.

The figure above also shows that the CRISP-DM process is not unidirectional. Information flows from Business Understanding into Data Understanding, the next step, which can in turn alter one's perception of the business; and the evaluation of any model created must be validated against the business understanding. The outer directional arrows form a complete circle, showing that the process can involve many iterations or cycles until an effective predictive model is created and deployed.

Skylab follows the CRISP-DM method for data mining and to help organize and prepare its data for analysis and reporting, as shown below.

Skylab Data Mining Outline

Skylab comparison and engagement data is collected and compiled using the CRISP-DM process.

  1. Business Rule 1: All stats and data analyses must relate to the entire Skylab hierarchy:
     - Skylab
     - Solar System (Skylab's name for resellers)
     - Planet (Skylab customers)
     - Experience (each Planet may have multiple Experiences that users can select)
     - End user

  2. Business Rule 2: All data must be presentable in a dashboard and exportable:
     - For reports to the CMO and sales team
     - For sponsors/investors
     - For data science analysis and reporting
     - For creating widgets on our apps

  3. Business Rule 3: Reports should reflect how Skylab clients rank against other social media platforms across the following data dimensions:
     - Total App Downloads
     - User Engagement
       - DAU and MAU totals
       - DAU% and MAU% compared to the total active user base, defined as users who have been active on the platform in the last 6 months (this time frame may be adjusted)
       - DAU over MAU
       - SAU (Super Active Users): a new metric Skylab is tracking, measuring users who exhibit above-average activity over a 30-day period
       - SEU (Super Engaged Users): another new metric Skylab has developed to measure users with extraordinarily long "streaks" of consecutive days on the app (mobile or web), something not specifically covered by the DAU/MAU statistics
       - Rank against other social media platforms
       - Rank against Skylab's clients
     - Brand Engagement % (how Skylab Planets compare to other social media platforms)
       - Rank against other platforms
       - Rank within Skylab's world
     - Sustained Engagement / Retention %
       - Based on Wilson's Law: the 30-10-10 rule
     - VRS Index (how users can become better persons)
       - Actions
         - Community-focused (helping the Planet grow and prosper)
         - Personal growth (helping the person grow and prosper)
         - Helping others (helping other users grow and prosper)
       - Social engagement behaviors
         - Posts
         - Lessons
         - Chats
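Several of the engagement dimensions above (DAU, MAU, MAU%, and DAU over MAU) reduce to simple set arithmetic over a daily-activity log. A minimal sketch, with an invented three-user log; the names, dates, and schema are purely illustrative, not Skylab's actual data model:

```python
from datetime import date, timedelta

# Hypothetical daily-activity log: user_id -> set of active dates.
activity = {
    "u1": {date(2018, 3, d) for d in range(1, 31)},   # active every day
    "u2": {date(2018, 3, d) for d in (1, 5, 9, 20)},  # occasionally active
    "u3": {date(2018, 3, 30)},                        # active once
}

today = date(2018, 3, 30)
month_start = today - timedelta(days=29)

# DAU: users active today; MAU: users active in the trailing 30 days.
dau = sum(1 for days in activity.values() if today in days)
mau = sum(1 for days in activity.values()
          if any(month_start <= d <= today for d in days))

total_users = len(activity)
mau_pct = 100 * mau / total_users
stickiness = dau / mau  # "DAU over MAU"

print(dau, mau, round(mau_pct, 1), round(stickiness, 2))
```

The same loop structure extends to SAU by replacing the membership test with a count-of-active-days threshold over the 30-day window.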


Phase 1. Business Understanding

  1. Statement of Business Objective

  2. Statement of Data Mining Objective

  3. Statement of Success Criteria

The first part of this process is focused on understanding the project objectives and requirements from a business perspective. Once this is accomplished, the data scientist and team transform this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives.

Determine business objectives

  1. thoroughly understand, from a business perspective, what the client really wants to accomplish; this may entail interviewing the person or team in charge of the project to gain a complete understanding of its specific goals

  2. during the interviews, note at the outset any factors that could influence the outcome of the project

Assess situation

Experienced data scientists also engage in additional fact-finding about all of the factors that should be considered, fleshing out the key details as well as key performance indicators (KPIs).

Determine data mining goals

  1. a business goal states objectives in business terminology

  2. a data mining goal states project objectives in technical terms

Example: a business goal might be "Increase sales to existing customers."

The corresponding data mining goal: "Predict how many products a specific customer will buy, given their purchases over the past 12-36 months, demographic information (gender, age, salary, geo-location) and the price of the item."
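A data mining goal of this kind can be prototyped in a few lines. The sketch below fits a toy least-squares model predicting purchase counts from customer tenure; the data and the single feature are invented stand-ins for the demographic and price features named above, not a real Skylab dataset:

```python
# Toy history: (months as a customer, products bought) for five customers.
tenure = [12, 18, 24, 30, 36]
bought = [3, 4, 6, 7, 9]

# Least-squares fit by hand (slope = Sxy / Sxx), dependency-free.
mx = sum(tenure) / len(tenure)
my = sum(bought) / len(bought)
sxy = sum((x - mx) * (y - my) for x, y in zip(tenure, bought))
sxx = sum((x - mx) ** 2 for x in tenure)
slope = sxy / sxx
intercept = my - slope * mx

# Predict purchases for a customer with 27 months of history.
predicted = slope * 27 + intercept
print(predicted)
```

A production model would of course use many features and a proper train/test split; the point is only that the data mining goal, unlike the business goal, is directly computable.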

Produce project plan

  1. describe the intended plan for achieving the data mining goals and the business goals

  2. the plan should specify the anticipated set of steps to be performed during the rest of the project including an initial selection of tools and techniques

Phase 2. Data Understanding

  1. Explore the Data

  2. Verify the Quality

  3. Find Outliers

This phase starts with the initial data collection and the other activities necessary for the data science team to become familiar with the data. Steps are taken to identify data quality problems and missing values, and to conduct preliminary analyses (histograms, scatter plots, descriptive statistics) that surface initial insights into the data or detect interesting subsets from which to form hypotheses about hidden information.
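The preliminary analyses described above can be sketched with the standard library alone. The session lengths below are invented, and the 1.5×IQR rule is one common convention for flagging outliers:

```python
import statistics

# Hypothetical session lengths in minutes; one suspiciously large value.
sessions = [4, 5, 5, 6, 7, 7, 8, 9, 10, 95]

# Basic descriptive statistics.
mean = statistics.mean(sessions)
median = statistics.median(sessions)
stdev = statistics.stdev(sessions)

# Simple IQR rule for flagging outliers.
q1, _, q3 = statistics.quantiles(sessions, n=4)
iqr = q3 - q1
outliers = [x for x in sessions
            if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(mean, median, round(stdev, 1), outliers)
```

Here the mean (15.6) is pulled far above the median (7.0) by a single extreme session, exactly the kind of pattern this phase is meant to surface before modeling.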

Collect initial data

  1. work with client IT to provide the data listed in the project resources created in Phase 1

  2. data loading, data cleaning and identification of all variables necessary for data understanding and analysis

  3. this phase could lead to initial data preparation steps (Phase 3)

  4. if the data is spread across multiple data sources, data integration is an additional issue, either here or in the later data preparation phase

Describe data

  1. examine the basic properties of the acquired data

  2. report on the results

Explore data

Once the initial data exploration and descriptive analyses have been completed, the core data mining questions can be developed and formally addressed using querying, visualization and reporting, including:

summarization of the distribution of key attributes and simple aggregations

identify the relations between pairs or small numbers of attributes

detail the properties of significant sub-populations, simple statistical analyses

  1. these steps may address directly the data mining goals

  2. the outcome of these processes may contribute to or refine the data description and quality reports

  3. they also may feed into the transformation and imputation of data and other data preparation processes needed

Verify data quality

the final step in Phase 2 is to examine the quality of the data, addressing questions such as:

"Is the data complete?" and "Are there missing values in the data?"
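A minimal completeness check of the kind these questions call for, over a few invented records (the field names are illustrative only):

```python
# Hypothetical raw records with missing fields (None), as might come
# from an initial data pull.
records = [
    {"user": "u1", "age": 29, "city": "Austin"},
    {"user": "u2", "age": None, "city": "Boston"},
    {"user": "u3", "age": 41, "city": None},
]

fields = ["user", "age", "city"]

# Per-field missing counts, and the number of fully complete rows.
missing = {f: sum(1 for r in records if r.get(f) is None) for f in fields}
complete = sum(1 for r in records
               if all(r.get(f) is not None for f in fields))

print(missing, complete)
```

The per-field counts show which columns need attention in Phase 3, and the complete-row count gives a quick answer to "Is the data complete?"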

Phase 3. Data Preparation

  1. Usually takes over 80% of the total time of the data mining process and includes the following steps

  2. Collection and organization of the data or additional data

  3. Assessment

  4. Consolidation and Cleaning

  5. Data selection

  6. Transformations

This important phase covers all activities to construct the final dataset from the initial raw data.

Data preparation tasks usually occur multiple times and not in any prescribed order. Tasks include creating data tables, recording key elements of the data, attribute selection as well as transformation and cleaning of data for modeling tools.

Select data

  1. the first part of this phase is very important: the data scientist must decide on the data to be used for analysis; generally, not all of the available data is used

  2. criteria for inclusion and exclusion cover relevance to the data mining goals, quality, and technical constraints such as limits on data volume or data types; for example, zip codes and telephone numbers are frequently excluded

  3. selection of key attributes, as well as selection of important records in a table, are often part of this process

Clean data

  1. the purpose of cleaning the data is to raise data quality to the level required by the selected analysis techniques; for example, modeling software generally does not work with missing cells, but open-source tools like R and KNIME have automated imputation routines built in to efficiently fill all cells with data

  2. this process may involve creating clean subsets of the data, imputing suitable defaults, or more ambitious techniques such as estimating missing data by modeling
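A minimal version of the imputation described above: mean-filling a column with missing cells. The ages are invented, and real tools offer more sophisticated strategies (median, model-based estimation):

```python
import statistics

# Hypothetical column with missing cells encoded as None.
ages = [29, None, 41, 35, None, 30]

# Fill each missing cell with the mean of the observed values.
observed = [a for a in ages if a is not None]
fill = statistics.mean(observed)

imputed = [a if a is not None else fill for a in ages]
print(imputed)
```

Mean imputation is the simplest "suitable default"; estimating missing values by modeling, as the text notes, is the more ambitious alternative.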

Key Steps in Phase 3

  1. Construct data

this includes data construction operations such as the production of derived attributes, entire new records, or transformed values for existing attributes.

  2. Integrate data

this involves the application of methods in which key information is combined from multiple tables or records to create new records or values that are useful for modeling.

  3. Format data

formatting data means making syntactic modifications that do not change its meaning but might be required by the modeling tool; e.g., logistic regression requires the target to be re-formatted and transformed into a binary distribution of 0 or 1.
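As an illustration of the logistic-regression re-formatting just mentioned, the sketch below thresholds a continuous outcome into the binary 0/1 target such a model expects. The spend values and the $50 cutoff are made up:

```python
# Hypothetical continuous outcome: monthly spend per customer.
spend = [12.0, 80.5, 49.9, 150.0, 50.0]

# Re-format into the binary 0/1 target a logistic-regression tool
# expects, using an illustrative threshold.
THRESHOLD = 50.0
target = [1 if s >= THRESHOLD else 0 for s in spend]

print(target)  # -> [0, 1, 0, 1, 1]
```

Note that the choice of threshold is itself a modeling decision and should be justified against the business objective from Phase 1.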

Phase 4. Modeling

  1. The first step is to select the modeling techniques to use on the cleaned and prepared data; usually more than one model is selected, based on the data mining objective.

  2. Build models, usually a family of models is selected which best seem to fit the data (e.g., regression models, classification models, unsupervised machine learning models)

  3. Assess model (rank the models in terms of accuracy)

Various modeling techniques are selected and applied and their parameters are calibrated to optimal values. Some techniques have specific requirements on what type of data can be modeled. Often it is necessary to conduct additional data preparation.

Build model

run the modeling tool on the prepared dataset to create one or more models

Assess model

  1. The data scientist then interprets the models according to his or her domain knowledge, the data mining success criteria and the desired test design

  2. Additional assessment is performed as part of the model validation procedures which may include:

  3. Lift charts

  4. AUC/ROC measures

  5. Bootstrapping

  6. following the validation phase, the data scientist meets with business analysts and domain experts to discuss the data mining results in the business context

  7. in this step, the data science team generally focuses on the winning models, but the evaluation phase also takes into account all other results produced in the course of the project
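Of the validation measures listed above, AUC/ROC is easy to compute directly. A minimal sketch using the rank identity AUC = P(score of a positive > score of a negative), with toy labels and scores:

```python
# Toy classifier output: true labels and predicted scores.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# Count positive/negative pairs where the positive scores higher;
# ties count as half a win (the Mann-Whitney convention).
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))

print(auc)  # 11 of 12 pairs ranked correctly
```

The same pairwise counts underlie lift charts; bootstrapping would repeat this computation over resampled label/score pairs to attach a confidence interval to the AUC.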

Phase 5. Evaluation

  1. Evaluation of the model: how well it performed on test data

  2. Methods and criteria: depend on the model type

  3. Interpretation of the model: how important, and how easy or hard, depends on the algorithm

Thoroughly evaluate the model and review the steps executed to construct it, to be certain it properly achieves the business objectives. A key objective is to determine whether some important business issue has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.

Evaluate results

  1. assesses the degree to which the model meets the business objectives

  2. seeks to determine if there is some business reason why this model is deficient

  3. test the model(s) in the real application, if time and budget constraints permit

  4. also assesses other data mining results generated

  5. unveil additional challenges, information or hints for future directions

Review process

  1. do a more thorough review of the data mining engagement in order to determine if there is any important factor or task that has somehow been overlooked

  2. review the quality assurance issues

  3. e.g., "Did we correctly build the model?"

Determine next steps

  1. decide how to proceed at this stage

  2. decide whether to finish the project and move on to deployment, initiate further iterations, or set up new data mining projects

  3. analyses of remaining resources and budget influence these decisions

Phase 6. Deployment

  1. Determine how the results need to be utilized

  2. Who needs to use them?

  3. How often do they need to be used?

  4. Deploy data mining results by scoring a database, utilizing results as business rules, or interactive on-line scoring

The knowledge gained will need to be organized and presented in a way that the customer can use it. However, depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data mining process across the enterprise.

Plan deployment

  1. use the evaluation results to determine a strategy for deploying the data mining result(s) into the business

  2. document the procedure for later deployment

Plan monitoring and maintenance

  1. important if the data mining results become part of the day-to-day business and its environment

  2. helps to avoid unnecessarily long periods of incorrect usage of data mining results

  3. needs a detailed monitoring process

  4. takes into account the specific type of deployment

Produce final report

  1. the project leader and his team write up a final report

  2. may be only a summary of the project and its experiences

  3. may be a final and comprehensive presentation of the data mining result(s)

Review project

  1. assess what went right and what went wrong, what was done well and what needs to be improved

CRISP-DM Applications at Skylab

Skylab recently implemented the CRISP-DM model to help organize and process all of the user data it collects and analyzes.

Below is a summary of the user data currently being tracked by Skylab.

Data Points & Stats

  1. User behavioral profile
     - Consistency streaks
       - Best streak
       - Current streak

  2. Action stats
     - Times a user has taken an Action defined in the Actions Inventory

  3. Virality stats
     - Sign-ups
     - Ripple (the total "impact" the user has had by sharing the app or sharing content and bringing people into the platform; it includes the "first generation" of sign-ups as well as all generations below, to infinity, so Ripple will always be >= Sign-ups)

  4. Tags system
     - Users get tagged by smart tags:
       - Location
       - Identity
       - Performance

  5. User gamification scores
     - Users have a score that is used to sort the Leaderboard
     - Users earn points that add to their score by engaging with the app (e.g., liking, sharing, taking actions, posting photos)
     - Badges engine tracking both number of actions and consistency streaks
     - A significant number of platform actions (liking, commenting, sharing, following, chatting, etc.) are being tracked
     - Any custom Action that is set in the Actions Inventory
     - Consuming content (e.g., completing a Post)
     - Any training content (e.g., completed Course X)

  6. Admin Dashboard
     - Total Users
     - New Users
     - MAU
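The best-streak and current-streak statistics in the behavioral profile above can be computed directly from a user's set of active dates. A minimal sketch with invented dates:

```python
from datetime import date, timedelta

# Hypothetical activity dates for one user.
active = {date(2018, 3, d) for d in (1, 2, 3, 10, 11, 27, 28, 29, 30)}
today = date(2018, 3, 30)

def best_streak(days):
    """Longest run of consecutive active days."""
    best = 0
    for d in days:
        if d - timedelta(days=1) not in days:  # d starts a streak
            length = 1
            while d + timedelta(days=length) in days:
                length += 1
            best = max(best, length)
    return best

def current_streak(days, today):
    """Consecutive active days ending today."""
    length = 0
    while today - timedelta(days=length) in days:
        length += 1
    return length

print(best_streak(active), current_streak(active, today))
```

The same streak lengths are the natural input for the SEU metric described earlier, which flags users with extraordinarily long runs of consecutive days on the app.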

Data that will be tracked in the future is listed below. Note: CSW stands for Community Stats Widget, a new functionality that is currently in development and will be added to the app in the near future.

  1. Total Users [CSW]

  2. New Users [CSW]

  3. User engagement
     - MAU / DAU [CSW]
     - Average number of sessions per user
     - Average time per user
     - iOS/Android app downloads to date

  4. Location
     - Total number of countries [CSW]
     - Total number of cities [CSW]
     - Use location tag data to report or display locations on a map

  5. Identity
     - Planet level: identity tags at the Planet level (e.g., Gender, Generation)
     - Experience level: TpEs (Tags-per-Experience tags)

  6. Actions taken
     - Total Actions taken [CSW]
     - Actions taken by type/orientation:
       - Socially responsible actions [CSW] (e.g., 123)
       - Personally responsible actions [CSW]
     - Actions taken by category/theme (at Planet level and above) [CSW] (e.g., 657 «Health & Wellness» Actions taken today)
     - Specific Actions taken (at Experience level) [CSW] (e.g., 234 «Action 1: 20′ Workout» taken today on «Experience A»)

  7. Channels
     - Total Channels published (does not include drafts)
     - Channels followed
     - Channel unsubscribes
     - Total Posts published (does not include drafts)
     - Posts viewed
     - Posts read (i.e., completed) [CSW]

  8. Training programs/categories
     - Total Training Categories published (does not include drafts)
     - Total Courses published (does not include drafts)
     - Total Lessons published (does not include drafts)
     - Top 10 Courses & Lessons that users are most engaged with
     - Courses enrolled
     - Courses completed [CSW]
     - Course completion ratio (Courses Completed / Courses Enrolled)
     - Stats per Category/Program (aggregation of Course stats): views, completions, completion ratio, likes, comments, shares
     - Stats per Course (aggregation of Lesson stats): views, completions, completion ratio, likes, comments, shares
     - Stats per Lesson: views, completions, completion ratio, likes, comments, shares

  9. Community activity
     - User photos (e.g., selfies) posted on the RW [CSW]
     - Badges won, by time period

  10. Chat
     - Total messages sent
     - Total users interacting via chat

  11. Revenue/income generated through web payments and/or in-app purchases
     - Total revenue: day, week, month, year, to date (or whatever the filter allows)
     - Channel subscription revenue
     - Training one-time payment revenue
     - Refunds from one-time payments (not subscriptions)

NOTE: stat names ending in «[CSW]» mark stats that should be available for display on the Community Stats Widget on the Home Screen. These are stats the end user could see, if the Admin chooses to display them on the Community Stats Widget.

Skylab Data Mining Set-Up (Influenced by CRISP-DM)

Stats per Data Point

For each of the data points, the following must be available:

  1. Current value (e.g., Total Users)

  2. Delta or % increase/decrease (e.g., Total Users increased 5% in the last month)

  3. Ability to display the current value and delta for specific time frames: day / week / month / quarter / year

  4. Data collected and displayed must aggregate user behavior across all platforms: iOS + Android + Web

  5. Data "resolution" should be down to the minute (at the very least to the hour); data must be trackable and reportable at least hourly, if not minute by minute

  6. For the purposes of reporting and data mining, Actions added to the Actions Inventory must be segmented by:
     - Responsibility:
       - Personal responsibility
       - Social responsibility
     - Category (i.e., topic/theme the Action relates to):
       - Personal growth
       - Health / wellness
       - Education / training
       - Community development
       - Share
       - Finances / wealth
       - Business development
       - Nonprofit / social activism
       - Other
     - IMPORTANT: built-in platform actions (liking, commenting, sharing, etc.) are to be considered and tracked as «socially responsible» actions
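The current-value-plus-delta requirement above reduces to a small computation over period snapshots. A sketch with invented monthly user counts (the 5% figure in the requirement's own example is roughly what these numbers produce):

```python
# Hypothetical monthly snapshots of one stat: Total Users.
total_users = {"2018-02": 19340, "2018-03": 20308}

current = total_users["2018-03"]
previous = total_users["2018-02"]

# Period-over-period delta the dashboard would display.
delta_pct = 100 * (current - previous) / previous

print(current, round(delta_pct, 1))
```

The same formula applies unchanged at day, week, quarter, or year resolution; only the pair of snapshots selected by the time-frame filter changes.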

Skylab Genesis Process

CRISP-DM is also used to guide the Genesis process: the process of developing a new customer app and platform (a new Planet). The most important application of CRISP-DM here is understanding the overall business model and purpose. Along with this comes the importance of understanding what types of behaviors the customer (i.e., the Planet owner) wants users to engage in while on the app. Skylab's Genesis team uses the first three phases of the CRISP-DM model to make sure they understand what each business owner wants to accomplish with the Planet from a business process perspective, and also which user behaviors will be positively reinforced, modeled and shaped.

References on CRISP-DM

Shearer C., The CRISP-DM model: the new blueprint for data mining, J Data Warehousing (2000); 5:13—22.

Gregory Piatetsky-Shapiro (2002); KDnuggets Methodology Poll

Gregory Piatetsky-Shapiro (2007); KDnuggets Methodology Poll

Óscar Marbán, Gonzalo Mariscal and Javier Segovia (2009); A Data Mining & Knowledge Discovery Process Model. In Data Mining and Knowledge Discovery in Real Life Applications, Book edited by: Julio Ponce and Adem Karahoca, ISBN 978-3-902613-53-0, pp. 438–453, February 2009, I-Tech, Vienna, Austria.

Lukasz Kurgan and Petr Musilek (2006); A survey of Knowledge Discovery and Data Mining process models. The Knowledge Engineering Review. Volume 21 Issue 1, March 2006, pp 1–24, Cambridge University Press, New York, NY, USA doi: 10.1017/S0269888906000737.

Azevedo, A. and Santos, M. F. (2008); KDD, SEMMA and CRISP-DM: a parallel overview. In Proceedings of the IADIS European Conference on Data Mining 2008, pp 182–185.

Have you seen ASUM-DM?, By Jason Haffar, 16 October 2015, SPSS Predictive Analytics, IBM

Harper, Gavin and Stephen D. Pickett (August 2006); "Methods for mining HTS data." Drug Discovery Today 11 (15–16): 694–699. doi:10.1016/j.drudis.2006.06.006. PMID 16846796.

Pete Chapman (1999); The CRISP-DM User Guide.

Pete Chapman, Julian Clinton, Randy Kerber, Thomas Khabaza, Thomas Reinartz, Colin Shearer, and Rüdiger Wirth (2000); CRISP-DM 1.0 Step-by-step data mining guides.

Colin Shearer (2006); First CRISP-DM 2.0 Workshop Held

References on social media platform engagement statistics

*1- .24B Billion est. for Total Users/2.13B MAU/ MAU%-

