Jan Sonck @ DISUMMIT 2017: “Mobile network data, a lever for augmented contextual insights”

DiS17_Speaker_09:00_Jan Sonck

Jan Sonck is the Head of Enterprise Innovation at Proximus, the largest Belgian telecommunications company, where he has also been in charge of Indirect and Multi-Channel Solutions, Marketing and Business Development. Jan is also an experienced marathon runner.

About Jan and how mobile data is being used today

Being as large a telecommunications company as Proximus opens up a wide range of opportunities to use data, and that is what Jan came to share with us.

The growing number of people using mobile devices, each sketching a digital fingerprint, has triggered collaboration between academia and telecom companies on joint research, offering an overview of the qualitative and quantitative elements users consider in their daily lives.

Beyond academia, the collaboration has been extended to other players, such as governmental institutions (at all levels, from municipal to European) and management consulting firms.

These synergies have led to increasingly accurate crowd analytics, displaying trajectories and new types of visualization (in 2D and 3D). This also involves fusing multiple data sources and promoting the importance of open data. The result is the use of mobile data for real-time prediction in different contexts, ranging from large-scale events (for instance, music festivals) to air quality and mobility, among many others.

In a nutshell, collaboration should keep permeating all sectors, keeping citizens in the loop about how their data is being used.

A takeaway from Jan’s presentation

“The increasing complexity of data requires a deeper sense of collaboration”

 

We look forward to meeting Jan again soon so he can tell us how collaboration around mobile data is evolving!

Jan’s interview:

Jan’s presentation recording:

Jan’s deck:

Jan’s presentation drawing:

DIS17_JanSonck

Nurturing your Data Scientists : 70 years of accumulated experience at your service!

The Data Science community is proud to announce the arrival of a startup dedicated to coaching and nurturing data scientists: WeLoveDataScience

Data scientist… Where are you? Do you really exist?

That’s the question many managers currently face. Data scientists are rare five-legged sheep: difficult to find, difficult to hire and difficult to keep.

 


 

WeLoveDataScience is a brand new business unit, hosted at the European Data Innovation Hub, dedicated to searching, selecting, training, coaching and nurturing data science talent. The Belgian market does not offer enough candidates: we will train the next generation of data scientists for you.

Whatever your projects are, we propose to prepare for you the data scientist(s) you need, following these 7 steps:

  1. Together we prepare a detailed job description corresponding to your real needs: is this about data analysis, basic queries, reporting, data mining, big data or new technologies?
  2. We identify candidates on the market in particular through close collaborations with Belgian universities (Ghent, ULB/VUB, UCLouvain…)
  3. You hire the right candidate!
  4. He/she attends a 12-week data science bootcamp, including: a high-level overview, data science for business, the technical stack (SAS, R, Python…), and introductions to specific technologies/topics (NoSQL and graph databases, social network analysis, text mining…)
  5. He/she then works for you at our offices for 4 to 6 months, coached by one of our experts. On-the-job coaching on real projects: your projects, but also hackathons, technology workshops, meetups…
  6. After those 10 intensive months, (s)he is ready to work for you on site. (S)He will demonstrate his/her knowledge by giving a course on a specific topic, writing entries in specialised blogs, or giving a presentation at a conference…
  7. We assist you in yearly evaluation and follow-up.

Our intentions for 2016 are to help companies create and develop data science teams and to build a data science culture… And WeLoveDataScience: this is 70 years of accumulated experience at your disposal!

Want to know more? Visit www.welovedatascience.com, send an email to info@welovedatascience.com or simply fill in this contact form. We will visit you, explain what we do, and brainstorm about your specific needs.

 

The essence of Predictive Analytics for Managers – Executive Training

Creating more business value from investments in Big Data and Predictive Analytics through better project definition, project management and better usage of Predictive Analytics projects.

Target audience

Managers of analytical teams, Managers of functional departments (marketing, risk, operations, HR,…), Project managers, CxO.

Details

  • Duration: One afternoon workshop (4h):
    • December 3rd, 2015
  • Location: European Data Innovation Hub @ AXA, Vorstlaan 23, 1170 Brussel
  • Price: 570€ per manager
  • Limited to 8–12 participants

Registration:

Please register using Eventbrite following this link.

Motivation:

Fueled by the energy around Big Data projects, an increasing number of managers are attracted to the domain of Big Data and advanced analytics. When successfully applied, analytics provides the key to turn data into big value. Typical goals of such high-impact projects include:

  • increasing targeted marketing success by predicting response,
  • increasing marketing relevance by offering the right product to the right client,
  • decreasing risk exposure by predicting credit or fraud risk,
  • increasing process efficiency,
  • retaining crucial staff members.

Overview:

This training provides a backbone for managing projects in Predictive Analytics that maximally impact the organisation. Put differently, in this workshop we ensure participants reap the maximum return on their analytical investments. Additionally, this training establishes the foundation for fruitful collaboration between analysts, their peers and decision makers.

The workshop is designed as an interactive learning experience packed with best practices illustrated with real domain experience.

Learning objectives:

After the training, participants will be able to define and manage developments in Predictive Analytics. In practice, participants will be able to:

  • Explain the crucial phases needed to engage in projects in predictive analytics
  • Define a project in predictive analytics in detail, using a concise project definition checklist
  • Understand the essential principles for predictive analytics and why they matter to management
  • Understand the requirements and limitations of predictive analytics
  • Fully understand the output of predictive analytics to increase their impact on the organizational goals

 

Prerequisites:

Before the start of the first session, participants should attempt to define a practical and relevant project within their organisation. By the end of the workshop, the aim is that managers will be able to further define this project and understand the steps needed to manage it to success.


About the trainer:

Geert Verstraeten (PhD) is a dynamic trainer with a solid background both in predictive analytics and in professional training. Geert has 14 years of experience in building predictive models for organisations in a wide array of industries. Additionally, he has 14 years of academic and industry experience in training and coaching managers and analysts to succeed with Predictive Analytics. Since 2006, he has been managing partner at Python Predictions (www.pythonpredictions.com), a Brussels-based niche player in Predictive Analytics, and he has been involved in building analytical communities both in Belgium and abroad. Geert is a frequent speaker at academic and business conferences in analytics. Since 2014, Geert has been a certified professional trainer.

Certification:

Attendees receive an electronic version of the handouts and a proof of participation at the conclusion of the workshop.

Blog – Predictive Analytics – a Soup Story by Geert Verstraeten


Predictive Analytics – a Soup Story

A simple metaphor for projects in predictive analytics 

By: Geert Verstraeten, Predictive Analytics advocate, Managing Partner and Professional Trainer, Python Predictions

The analytical scene has recently been dominated by the prediction that we would soon experience an important shortage of analytical talent. In response, academic programs and massive open online courses (MOOCs) have sprung up like mushrooms after the rain, all with the purpose of developing skills for the analyst or their more modern counterpart, the data scientist. However, in the original McKinsey article, the shortage of analytics-oriented managers was predicted to be ten times more important than the shortage of analysts[1]. But how do we offer relevant concepts and tools to managers without drowning our ‘sweet victims’ in technology and jargon?

For managers, most analytics training falls short in a critical way. The vast majority of new analytics training focuses on core analytics algorithms and model building, not on the organizational process needed to apply them. In my opinion, the single most important tool for any manager lies in understanding the process of what should be managed. The absolute essence when asked to supervise predictive analytical developments lies in having a solid understanding of the main project phases. Obviously, we are not the first to realize that this is vital. Tools have been developed to describe the process methodology for developing predictive models[2]. However, it is difficult for non-experts to become excited about these tools, as they describe the phases in a rather dry way.

We have experimented with different ways to present process methodology in a more fun and engaging way. Today, we no longer experiment. In our meetings and trainings with managers, we present the development of analytical models as simply as the process of making soup in a soup bar.

Project definition

This first phase is concerned with understanding the organization’s needs, priorities, desires and resources. Taking the order basically means we should start by carefully exploring what it is that we need to predict. Do we want to predict who will leave our organization in the next year, and if so, how will we define this concretely? Once the order becomes clear, it is time to check the stock to make sure we will be able to cook the desired dish. This is equivalent to checking data availability. Additionally, it is important to have an idea about timing: will our client need to leave in time to catch the latest movie? This is pretty similar to drawing up a project plan.

Data preparation

The second phase deals with preparing all useful data so that they are ready to be used subsequently in the analysis. For those not familiar with (French) cooking jargon, mise en place is a term used in professional kitchens to refer to organizing and arranging the ingredients (e.g. freshly chopped vegetables, spices, and other components) that a cook will require for his shift[3]. Data are for predictive analytics what ingredients are for making soup. In predictive analytics, data are gathered, cleaned and often sliced and diced so that they are ready to be used in a later analytical stage.

Model building

The main task in cooking the soup lies in choosing exactly those ingredients that blend into a great result. This is no different in predictive modeling, where the absolute essence lies in selecting those variables that are jointly capable of predicting the event of interest. One does not make a great soup with only onions. Obviously, not only the presence of ingredients matters, but also the proportions in which they are used. Compare this to the parameters of predictors: not every predictor is equally important for obtaining a high-quality result. Finally, cooking techniques matter just as much as algorithms do in predictive analytics: they represent essentially different ways to combine the same data into the best soup.

Model validation

In cooking it is crucial to taste a dish before it is served. This is very similar to model validation in predictive model building. Both technical and business-relevant measures can be used to objectively determine whether a model built on a specific data set will hold true for new data. As long as the soup does not taste right, we can iterate back to cooking, until the final soup is approved, i.e. the champion model is selected.

Model usage

This phase is all about presentation and professional serving. A great soup served in an awful bowl may not be fully appreciated. The same holds true for predictive models: a model with fantastic performance may fail to convince potential users when key insights are missing. Drawing a colorful profile of the results may prove instrumental in convincing the audience of the model’s merit. If done successfully, this will likely result in an in-field experiment, for example designing a set of retention campaigns targeting those with the highest potential to leave. At that point, the engaged analyst should check in on whether the meal is being enjoyed.
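The five phases above map naturally onto a minimal predictive-modeling workflow. The following sketch uses scikit-learn and synthetic churn data purely for illustration; the dataset, feature names and choice of tools are assumptions of mine, not something the article prescribes:

```python
# A minimal sketch of the five "soup" phases as code.
# All data and names are synthetic/illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Phase 1 - Project definition: predict who will leave in the next year.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # e.g. usage, tenure, spend, ... (invented)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Phase 2 - Data preparation ("mise en place"): set data aside for
# building and for tasting, and get features into usable shape.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Phase 3 - Model building: combine the ingredients (variables) into
# a recipe; the algorithm is the cooking technique.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Phase 4 - Model validation: taste on held-out data before serving.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")

# Phase 5 - Model usage: score customers and serve the results, e.g.
# pick the highest-risk ones for a retention campaign.
scores = model.predict_proba(X_test)[:, 1]
top_risk = np.argsort(scores)[::-1][:10]
```

If the tasting in phase 4 disappoints, one iterates back to phase 3 with different ingredients or techniques, exactly as the metaphor suggests.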

Conclusion

This simple, intuitive process has been important in allowing managers to engage with the process in a fun way. Presenting the process in a non-technical way makes it digestible (to be fair, I’ve stolen this phrase from my friend Andrew Pease, Global Practice Analytics Lead at SAS, because it makes such great sense in this context). However, it should remain clear that it is only a metaphor. At some point, building predictive models is obviously also different from making soup. Every phase, especially project definition, involves many more components than those where a link with soup can be found. But the metaphor gets us where we want to be: a point where a discussion is possible on what is needed to develop predictive models, and where a minimum of trust can exist. It ensures that we get on speaking terms with decision makers and all those who will be impacted by the models developed.

Notes and further reading

We fully realize this is not completely different from CRISP-DM, the Cross Industry Standard Process for Data Mining, which was developed in 1996 and is still the leading process methodology, used by 43% of analysts and data scientists. However, unless you are a veteran and/or an analyst, it is difficult to get really excited about CRISP-DM or its typical visualization. For those looking for a more in-depth understanding of the process, I recommend reading the modern answer to CRISP-DM, the Standard Methodology for Analytical Models (by Olav Laudy, Chief Data Scientist, IBM).

[1] In a previous post, we have also argued that the analytics-oriented manager is the main lever for success with predictive analytics.

[2] For the sake of clarity: a predictive model is a representation of the way we understand a phenomenon, or if you will, a formulaic way to combine predictive information so as to optimally predict future behavior or events.

[3] see the Wikipedia definition of mise en place

About Geert

Geert Verstraeten is Managing Partner at Python Predictions, a niche player in the domain of Predictive Analytics. He has over 10 years of hands-on experience in Predictive Analytics and in training predictive analysts and their managers. His main interest lies in enabling clients to take their adoption of analytics to the next level. His next training will be organised in Brussels on October 1st 2015.

 

Gratitude goes to Eric Siegel, Andrew Pease and our team at Python Predictions for delivering great suggestions on an earlier version of this article. All remaining errors are my own.

Link to the next training details from Geert.

Video