Analytics: Lessons Learned from Winston Churchill


I had the pleasure of being invited to lunch by Prof. Baesens earlier this week, and we talked about a possible next meetup subject: ‘War and Analytics’. As you might know, Bart is a WWI fanatic, and he has already written a nice article on the subject called ‘Analytics: Lessons Learned from Winston Churchill’.

Here is the article:

(Source: Nicolas Glady’s online articles.)

Analytics has been around for quite some time now. Even during World War II, it proved critical for the Allied victory. Some famous examples of Allied analytical activities include the breaking of the Enigma code, which effectively removed the danger of submarine warfare, and the 3D reconstruction of 2D images shot by gunless Spitfires, which helped Intelligence at RAF Medmenham eliminate the danger of the V1 and V2 and support Operation Overlord. Many of the analytical lessons learned at that time are now more relevant than ever, in particular those provided by one of the great victors of WWII, then Prime Minister, Sir Winston Churchill.

The phrase “I only believe in statistics that I doctored myself” is often attributed to him. However, while its wit is certainly typical of the Greatest Briton, it was probably a Nazi propaganda invention. Even so, can Churchill still teach us something about statistical analyses and Analytics?

 

A good analytical model should satisfy several requirements, depending upon the application area, and follow a certain process. CRISP-DM, a leading methodology for conducting data-driven analysis, proposes a structured approach: understand the business, understand the data, prepare the data, design a model, evaluate it, and deploy the solution. The wisdom of the 1953 Nobel laureate in Literature can help us better understand this process.
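To make the six phases concrete, here is a minimal sketch (not from the article) of a CRISP-DM-style workflow in Python, with scikit-learn and synthetic data standing in for a real project; the churn framing and all numbers are illustrative only.

```python
# A minimal sketch of the six CRISP-DM phases as one Python workflow,
# using scikit-learn on synthetic data. Everything here is illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 1. Business understanding: e.g. "which customers are likely to churn?"
# 2. Data understanding: here, a fake customer table with 20 features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 3. Data preparation: hold out a slice of data to mimic "future" customers.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# 4. Modelling: a simple, interpretable baseline.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 5. Evaluation: check that the model generalises before trusting it.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")

# 6. Deployment: in practice, score new customers and act on the output.
```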

Have an actionable approach: aim at solving a real business issue

Any analytics project should start with a business problem, and then provide a solution. Indeed, Analytics is not a purely technical, statistical or computational exercise, since any analytical model needs to be actionable. For example, a model can allow us to predict future problems like credit card fraud or customer churn rate. Because managers are decision-makers, as are politicians, they need “the ability to foretell what is going to happen tomorrow, next week, next month, and next year… And to have the ability afterwards to explain why it didn’t happen.” In other words, even when the model fails to predict what really happened, its ability to explain the process in an intelligible way is still crucial.

To be relevant for business, the parties concerned first need to define and qualify a problem before analysis can effectively find a solution. For example, trying to predict what will happen in 10 years or more makes little sense from a practical, day-to-day business perspective: “It is a mistake to look too far ahead. Only one link in the chain of destiny can be handled at a time.” Understandably, many analytical models in use in the industry have prediction horizons spanning no further than 2-3 years.

Understand the data you have at your disposal

There is a fairly large gap between data and comprehension. Churchill went so far as to argue that “true genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.”  Indeed, Big Data is complex and is not a quick-fix solution for most business problems. In fact, it takes time to work through and the big picture might even seem less clear at first. It is the role of the Business Analytics expert to really understand the data and know what sources and variables to select.
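As a hedged illustration of that data-understanding step, a few lines of pandas profiling go a long way; the file and column contents below are hypothetical.

```python
# Profile a candidate source before selecting variables (hypothetical file).
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical source table

print(df.shape)    # how much data do we actually have?
print(df.dtypes)   # which variables are numeric, which categorical?
print(df.isna().mean().sort_values(ascending=False).head(10))  # worst missingness
print(df.describe())  # ranges, outliers, suspicious constants
```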

Prepare the data

Once a complete overview of the available data has been drafted, the analyst will start preparing the tables for modelling by consolidating different sources, selecting the relevant variables and cleaning the data sets. This is usually a very time-consuming and tedious task, but needs to be done: “If you’re going through hell, keep going.”
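A minimal pandas sketch of that preparation step might look as follows; the tables and columns are hypothetical stand-ins for real sources.

```python
# Consolidate sources, select relevant variables, and clean.
import pandas as pd

customers = pd.read_csv("customers.csv")        # one row per customer
transactions = pd.read_csv("transactions.csv")  # one row per purchase

# Consolidate: aggregate transactions to customer level, then join.
spend = (transactions.groupby("customer_id")["amount"]
         .agg(total_spend="sum", n_purchases="count")
         .reset_index())
table = customers.merge(spend, on="customer_id", how="left")

# Select and clean: keep the relevant variables, fill gaps, drop duplicates.
table = table[["customer_id", "age", "tenure_months",
               "total_spend", "n_purchases"]]
table = (table.fillna({"total_spend": 0.0, "n_purchases": 0})
         .drop_duplicates("customer_id"))
```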

Never forget to consider as much historical information as you can. Typically, when trying to predict future events, using past transactional data is very relevant, as most of the predictive power comes from this type of information. “The longer you can look back, the farther you can look forward.”
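One common way to encode that "looking back" is to condense past transactional history into recency/frequency/monetary (RFM) features; the sketch below uses the same hypothetical transactions table as above.

```python
# Turn transaction history into recency/frequency/monetary features.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["date"])
snapshot = tx["date"].max()  # the "today" of the model

rfm = tx.groupby("customer_id").agg(
    recency_days=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)
```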

Read more here.

The ABC of Datascience blogs – collaborative update


A – ACID – Atomicity, Consistency, Isolation and Durability

B – Big Data – Volume, Velocity, Variety

C – Columnar (or Column-Oriented) Database

  • CoolData, by Kevin MacDonell, on analytics, predictive modeling and related cool data stuff for fund-raising in higher education.
  • Cloud of Data, by Paul Miller, aims to help clients understand the implications of taking data, and more, to the Cloud.
  • Calculated Risk, Finance and Economics

D – Data Warehousing – Relevant and very useful

E – ETL – Extract, transform and load

F – Flume – A framework for populating Hadoop with data

  • Facebook Data Science Blog, the official blog of interesting insights presented by Facebook data scientists.
  • FiveThirtyEight, by Nate Silver and his team, gives a statistical view of everything from politics to science to sports with the help of graphs and pie charts.
  • Freakonometrics, by Arthur Charpentier, a professor of mathematics, offers a nice mix of generally accessible and more challenging posts on statistics-related subjects, all with a good sense of humor.
  • Freakonomics blog, by Steven Levitt and Stephen J. Dubner.
  • FastML, covering practical applications of machine learning and data science.
  • FlowingData, the visualization and statistics site of Nathan Yau.

G – Geospatial Analysis – A picture worth 1,000 words or more

H – Hadoop, HDFS, HBASE

  • Harvard Data Science, thoughts on Statistical Computing and Visualization.
  • Hyndsight by Rob Hyndman, on fore­cast­ing, data visu­al­iza­tion and func­tional data.

I – In-Memory Database – A new definition of superfast access

  • IBM Big Data Hub Blogs, blogs from IBM thought leaders.
  • Insight Data Science Blog, on the latest trends and topics in data science, by alumni of the Insight Data Science Fellows Program.
  • Information is Beautiful, by independent data journalist and information designer David McCandless, author of the book ‘Information is Beautiful’.
  • Information Aesthetics, designed and maintained by Andrew Vande Moere, an Associate Professor at KU Leuven, Belgium. It explores the symbiotic relationship between creative design and the field of information visualization.
  • Inductio ex Machina, Mark Reid’s research blog on machine learning & statistics.

J – Java – Hadoop gave it a nice push

  • Jonathan Manton’s blog by Jonathan Manton, Tutorial-style articles in the general areas of mathematics, electrical engineering and neuroscience.
  • JT on EDM, James Taylor on Everything Decision Management
  • Justin Domke blog, on machine learning and computer vision, particularly probabilistic graphical models.
  • Juice Analytics on analytics and visualization.

K – Kafka – High-throughput, distributed messaging system originally developed at LinkedIn

L – Latency – Low Latency and High Latency

  • Love Stats Blog, by Annie, a market research methodologist who blogs about sampling, surveys, statistics, charts, and more.
  • Learning Lover, on programming and algorithms, with some flashcards for learning.
  • Large Scale ML & other Animals, by Danny Bickson, who started GraphLab, an award-winning large-scale open source project.

M – Map/Reduce – MapReduce

N – NoSQL Databases – No SQL Database or Not Only SQL

O – Oozie – Open-source workflow engine managing Hadoop job processing

  • Occam’s Razor by Avinash Kaushik, examining web analytics and Digital Marketing.
  • OpenGardens, Data Science for Internet of Things (IoT), by Ajit Jaokar.
  • O’Reilly Radar, covering a wide range of research topics and books.
  • Oracle Data Mining Blog, everything about Oracle Data Mining – news, technical information, opinions, tips & tricks. All in one place.
  • Observational Epidemiology, where a college professor and a statistical consultant offer their comments, observations and thoughts on applied statistics, higher education and epidemiology.
  • Overcoming Bias, by Robin Hanson and Eliezer Yudkowsky, presents statistical analysis in reflections on honesty, signaling, disagreement, forecasting and the far future.

P – Pig – Platform for analyzing huge data sets

  • Probability & Statistics Blog, by Matt Asher, a statistics grad student at the University of Toronto. Check out Asher’s Statistics Manifesto.
  • Perpetual Enigma, by Prateek Joshi, a computer vision enthusiast who writes compelling, question-style posts on machine learning.
  • PracticalLearning by Diego Marinho de Oliveira on Machine Learning, Data Science and Big Data.
  • Predictive Analytics World blog, by Eric Siegel, founder of Predictive Analytics World and Text Analytics World, and Executive Editor of the Predictive Analytics Times, makes the how and why of predictive analytics understandable and captivating.

Q – Quantitative Data Analysis

R – Relational Database – Still relevant and will be for some time

  • R-bloggers, the best blogs from the rich R community, with code, examples, and visualizations.
  • R Chart, a blog about the R language written by a web application/database developer.
  • R Statistics, by Tal Galili, a PhD student in Statistics at Tel Aviv University who also works as a teaching assistant for several statistics courses at the university.
  • Revolution Analytics, hosted and maintained by Revolution Analytics.
  • Rick Sherman: The Data Doghouse on business and technology of performance management, business intelligence and datawarehousing.
  • Random Ponderings by Yisong Yue, on artificial intelligence, machine learning & statistics.

S – Sharding (Database Partitioning)  and Sqoop (SQL Database to Hadoop)

  • Salford Systems Data Mining and Predictive Analytics Blog, by Dan Steinberg.
  • Sabermetric Research, by Phil Birnbaum, who blogs about statistics in baseball, the stock market, sports predictors and a variety of subjects.
  • Statisfaction, a blog jointly written by PhD students and post-docs from Paris (Université Paris-Dauphine, CREST). Mainly tips and tricks useful in everyday jobs, plus links to various interesting pages, articles, seminars, etc.
  • Statistically Funny: true to its name, epidemiologist Hilda Bastian’s blog is a hilarious account of the science of unbiased health research, with the added bonus of cartoons.
  • SAS Analysis, a weekly technical blog about data analysis in SAS.
  • SAS blog on text mining on text mining, voice mining and unstructured data by SAS experts.
  • SAS Programming for Data Mining Applications, by LX, Senior Statistician in Hartford, CT.
  • Shape of Data, presents an intuitive introduction to data analysis algorithms from the perspective of geometry, by Jesse Johnson.
  • Simply Statistics By three biostatistics professors (Jeff Leek, Roger Peng, and Rafa Irizarry) who are fired up about the new era where data are abundant and statisticians are scientists.
  • Smart Data Collective, an aggregation of blogs from many interesting data science people
  • Statistical Modeling, Causal Inference, and Social Science by Andrew Gelman
  • Stats with Cats, by Charlie Kufs, who has been crunching numbers for over thirty years, first as a hydrogeologist and, since the 1990s, as a statistician. His tagline: when you can’t solve life’s problems with statistics alone.
  • StatsBlog, a blog aggregator focused on statistics-related content, and syndicates posts from contributing blogs via RSS feeds.
  • Steve Miller BI blog, at Information management.

T – Text Analysis – The more information, the more analysis is needed

U – Unstructured Data – Growing faster than the speed of thought

V – Visualization – Important to keep the information relevant

  • Vincent Granville blog. Vincent, the founder of AnalyticBridge and Data Science Central, regularly posts interesting topics on Data Science and Data Mining

W – Whirr – Big Data Cloud Services i.e. Hadoop distributions by cloud vendors

X – XML – Still eXtensible and no Introduction needed

  • Xi’an’s Og, a blog written by a professor of Statistics at Université Paris-Dauphine, mainly centred on computational and Bayesian topics.

Y – Yottabyte – Equal to 1,000 zettabytes, 1 million exabytes or 1 billion petabytes

Z – Zookeeper – Helps manage Hadoop nodes across a distributed network

Feel free to add your preferred blog in the comments below.

Other resources:


More Jobs?


Click here for more data-related job offers.
Join our community on LinkedIn and attend our meetups.
Follow our Twitter account: @datajobsbe

Improve your skills:

Why don’t you join one of our #datascience trainings to sharpen your skills?

Special rates apply if you are a job seeker.

Here are some training highlights for the coming months:

Check out the full agenda here.

Join the experts at our Meetups:

Each month we organize a Meetup in Brussels focused on a specific DataScience topic.

Brussels Data Science Meetup
Brussels, BE – 1,417 business & data science professionals

The Brussels Data Science Community. Mission: to educate, inspire and empower scholars and professionals to apply data science to address humanity’s grand challenges.

Next Meetup: Data Unification in Corporate Environments
Wednesday, Oct 14, 2015, 6:30 PM

Blog – Predictive Analytics – a Soup Story by Geert Verstraeten


Predictive Analytics – a Soup Story

A simple metaphor for projects in predictive analytics 

By: Geert Verstraeten, Predictive Analytics advocate, Managing Partner and Professional Trainer, Python Predictions

The analytical scene has recently been dominated by the prediction that we would soon experience an important shortage of analytical talent. As a response, academic programs and massive open online courses (MOOCs) have sprung up like mushrooms after the rain, all with the purpose of developing skills for the analyst or their more modern counterpart, the data scientist. However, in the original McKinsey article, the shortage of analytics-oriented managers was predicted to become ten times more important than the shortage of analysts[1]. But how do we offer relevant concepts and tools to managers without drowning our ‘sweet victims’ in technology and jargon?

For managers, most analytics training falls short in a critical way: the vast majority of newfound analytics training focuses on core analytics algorithms and model building, not on the organizational process needed to apply them. In my opinion, the single most important tool for any manager lies in understanding the process of what should be managed. The absolute essence, when asked to supervise predictive analytical developments, lies in having a solid understanding of the main project phases. Obviously, we are not the first to realize that this is vital: tools have been developed to describe the process methodology for developing predictive models[2]. However, it is difficult for non-experts to become excited about these tools, as they describe phases in a rather dry way.

We have experimented with different ways to present process methodology in a more fun and engaging way. Today, we no longer experiment. In our meetings and trainings with managers, we present the development of analytical models as being as simple as the process of making soup in a soup bar.

Project definition

This first phase is concerned with understanding the organization’s needs, priorities, desires and resources. Taking the order basically means we should start by carefully exploring what it is that we need to predict. Do we want to predict who will leave our organization in the next year, and if so – how will we define this concretely? Once the order becomes clear, it is time to check the stock to make sure we will be able to cook the desired dish. This is equivalent to checking data availability. Additionally, it is important to have an idea about timing: will our client need to leave in time to catch the latest movie? This is pretty similar to drawing up a project plan.
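As a hedged sketch of pinning down "the order", here is one way to define a churn target over a 12-month horizon; the snapshot date, file and column names are illustrative choices, not part of the original article.

```python
# Define churn as "active before the snapshot, no purchase in the year after".
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["date"])

snapshot = pd.Timestamp("2014-01-01")         # the "today" of the model
horizon = snapshot + pd.DateOffset(years=1)   # 12-month prediction window

active_before = tx.loc[tx["date"] <= snapshot, "customer_id"].unique()
buyers_after = set(tx.loc[(tx["date"] > snapshot) & (tx["date"] <= horizon),
                          "customer_id"])

# One churn label per customer who was active before the snapshot.
labels = pd.Series({c: c not in buyers_after for c in active_before},
                   name="churned")
```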

Data preparation

The second phase deals with preparing all useful data so that it is ready to be used in the subsequent analysis. For those not familiar with (French) cooking jargon, mise en place is a term used in professional kitchens to refer to organizing and arranging the ingredients (e.g. freshly chopped vegetables, spices, and other components) that a cook will require for his shift[3]. Data are for predictive analytics what ingredients are for making soup. In predictive analytics, data are gathered, cleaned and often sliced and diced such that they are ready to be used in a later analytical stage.
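A small, hypothetical example of that mise en place in pandas: the prepared table is sliced into a model-ready feature matrix and a target.

```python
# Slice and dice a prepared table into features and target (names hypothetical).
import pandas as pd

table = pd.read_csv("analytic_base_table.csv")  # hypothetical prepared table

# Encode categoricals, separate the target from the features.
X = pd.get_dummies(table.drop(columns=["customer_id", "churned"]),
                   drop_first=True)
y = table["churned"]
```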

Model building

The main task in cooking the soup lies in choosing exactly those ingredients that blend into a great result. This is no different in predictive modeling, where the absolute essence lies in selecting those variables that are jointly capable of predicting the event of interest. One does not make a great soup with only onions. Obviously, not only the presence of ingredients is relevant, but also the proportions in which they are used – compare this to the parameters of predictors: not every predictor is equally important for obtaining a high-quality result. Finally, cooking techniques matter just as much as algorithms do in predictive analytics – they represent essentially different ways to combine the same data into the best soup.
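In code, the ingredient/proportion metaphor might look like the following scikit-learn sketch, where variable selection picks the ingredients and the fitted coefficients play the role of proportions; the data is synthetic and the choice of k is arbitrary.

```python
# Variable selection chooses the ingredients; coefficients set the proportions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a prepared analytic base table.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           random_state=1)

recipe = make_pipeline(
    SelectKBest(f_classif, k=10),        # choose the 10 most telling variables
    LogisticRegression(max_iter=1000),   # weigh them against each other
)
recipe.fit(X, y)
```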

Model validation

In cooking it is crucial to taste a dish before it is served. This is very similar to model validation in predictive model building. Both technical and business-relevant measures can be used to objectively determine whether a model built on a specific data set will hold true for new data. As long as the soup does not taste right, we can iterate back to cooking, until the final soup is approved – i.e. the champion model is selected.
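A minimal sketch of that tasting step: compare candidate models on held-out data and keep the champion. Everything here is illustrative, with synthetic data standing in for a real analytic base table.

```python
# Validate candidate models on a hold-out set and keep the champion.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=2),
}
scores = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
          for name, m in candidates.items()}
champion = max(scores, key=scores.get)  # iterate until the soup is approved
print(scores, "->", champion)
```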

Model usage

This phase is all about presentation and professional serving. A great soup served in an awful bowl may not be fully appreciated. The same holds true for predictive models – a model with fantastic performance may fail to convince potential users when key insights are missing. Drawing a colorful profile of the results may prove instrumental in convincing the audience of the model’s merit. If done successfully, this will likely result in an in-field experiment, for example designing a set of retention campaigns targeting those with the highest potential to leave. At that point, the engaged analyst should check in to see whether the meal is enjoyed.
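And a short, hypothetical sketch of the serving step: score the customer base and hand the riskiest decile to a retention campaign.

```python
# Score customers with the validated model and target the riskiest decile.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=3)
model = LogisticRegression(max_iter=1000).fit(X, y)

risk = model.predict_proba(X)[:, 1]   # churn probability per customer
cutoff = np.quantile(risk, 0.90)      # top decile = campaign target
target = np.where(risk >= cutoff)[0]
print(f"{len(target)} customers selected for the retention campaign")
```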

Conclusion

This simple, intuitive process has been important in allowing managers to engage in the process in a fun way. Presenting the process in a non-technical way makes it digestible (to be fair, I’ve stolen this phrase from my friend Andrew Pease, Global Practice Analytics Lead at SAS, because it makes such great sense in this context). However, it should remain clear that it is only a metaphor. At some point, building predictive models is obviously also different from making soup. Every phase, especially project definition, involves many more components than those where a link with soup can be found. But the metaphor gets us where we want to be – a point where a discussion is possible on what is needed to develop predictive models, and where a minimum of trust can exist: it ensures that we get on speaking terms with decision makers and all those who will be impacted by the models developed.

Notes and further reading

We fully realize this is not completely different from CRISP-DM, the Cross Industry Standard Process for Data Mining, which was developed in 1996 and is still the leading process methodology, used by 43% of analysts and data scientists. However, unless you are a veteran and/or an analyst, it is difficult to get really excited about CRISP-DM or its typical visualization. For those looking for a more in-depth understanding of the process, I recommend reading the modern answer to CRISP-DM, the Standard Methodology for Analytical Models (by Olav Laudy, Chief Data Scientist, IBM).

[1] In a previous post, we have also argued that the analytics-oriented manager is the main lever for success with predictive analytics.

[2] For the sake of clarity: a predictive model is a representation of the way we understand a phenomenon – or, if you will, a formulaic way to combine predictive information so as to optimally predict future behavior or events.

[3] see the Wikipedia definition of mise en place

About Geert

Geert Verstraeten is Managing Partner at Python Predictions, a niche player in the domain of Predictive Analytics. He has over 10 years of hands-on experience in Predictive Analytics and in training predictive analysts and their managers. His main interest lies in enabling clients to take their adoption of analytics to the next level. His next training will be organised in Brussels on October 1st 2015.

 

Gratitude goes to Eric Siegel, Andrew Pease and our team at Python Predictions for delivering great suggestions on an earlier version of this article. All remaining errors are my own.

Link to the next training details from Geert.


PwC study is out – Digital transformation becomes a key objective for CEOs

CEOs no longer question the need to embrace technology at the core of their business in order to create value for customers, but 58% still see the rapid pace of technological change as a challenge. So we learn from the 18th annual PwC CEO survey. It’s based on 1,322 interviews with CEOs in 77 countries, conducted between September and December 2014.

The majority of CEOs think that digital technologies have created high value for their organizations in areas like data and data analytics, customer experience, digital trust and innovation capacity. Surprisingly, however, most CEOs point to operational efficiency as the area where they have seen the best return on digital investment.


 

This is certainly good news, and it is what is driving the acceleration of the digital transformation of all businesses.

For more, check out these survey reports

Next step ?

If you want to see how Belgian companies are embracing this digital transformation, join us for the Data Innovation Summit in Brussels on March 26th.

 

Startup Launch – PrediCube – MORE EFFECTIVE ONLINE ADVERTISING


 

 

  • Stop sending all your money to Facebook and Google: spend it more wisely and get far better results from a Belgian #datascience startup offering a safer, more secure solution for online advertising.
  • Invest in more effective targeted online advertising while respecting consumers’ privacy concerns.

 


Nice crowd yesterday on the top floor of the Boerentoren (Start it @KBC) for the launch of PrediCube, a startup founded by David Martens.

We had 3 presentations:

  • Welcome and introduction by David Martens
  • Philippe Degueldre, director business intelligence at Pebble Media
  • Prof. Foster Provost, Professor at NYU’s Stern School of Business and co-founder of several big data companies

70% of every euro that is redirected from print to online advertising currently floats out of the local economy

 

Online advertising is big business, but spamming consumers with random ads has proven not to be the best way to optimize advertising returns: people only click on ads that are relevant to them. Hence, media companies and advertisers have been exploring targeted advertising strategies to boost online ads’ click-through rates (and revenues) based on an analysis of people’s Internet surfing patterns. An approach that conflicts with consumers’ stringent privacy concerns?

Not anymore, thanks to Belgian tech starter PrediCube – a spin-off from digital research center iMinds and University of Antwerp, and supported by the Start it @KBC incubator. PrediCube uses big data analytics to make sure consumers get to see those ads that are truly of interest to them, thereby increasing click-through rates up to 300%, while putting its unique ‘privacy by design’ strategy center stage.

Analyzing and predicting consumers’ online behavior to increase click-through rates up to 300%

Online advertising is big business. In Europe alone, online ad spending topped €27 billion in 2013 (a YoY increase of 11.9%). Yet, while this big market potential offers a great deal of opportunities, the shift to online advertising also comes with a number of important concerns. The authors of the book Het nieuwe TV-kijken found, for instance, that 70% of every euro that is redirected from print to online advertising currently floats out of the local economy – right into the hands of a few big international players such as Google and Facebook.

Trying to counter that drain of resources and valuable consumer data, PrediCube now brings to market a solution that can be used by local media advertising companies to predict consumers’ interest in specific ads – based on an analysis of their online behavior. Result: targeted online advertising campaigns that are much more efficient and generate higher revenues.

“Using PrediCube, we have been able to increase targeted online ads’ click-through rates up to 300%. In other words, we are now able to match ads with the right consumer profiles up to 3 times more accurately. Moreover, we want to significantly increase the inventory’s volume based on socio-demographic criteria (such as age and gender). Thanks to PrediCube we will be able to do this in the very near future. It goes without saying that this approach will positively impact our business and product offering,” says Philippe Degueldre, director business intelligence at Pebble Media, managing online advertising for 80 premium websites – such as VRT, Telenet, RTBF, Viacom, Elle and LinkedIn.

Strong focus on ‘privacy by design’

Dealing adequately with consumers’ privacy concerns is a major focus area for the PrediCube team; hence they are investing a great deal of effort in their ‘privacy by design’ approach.

“PrediCube works by means of cookies,” explains Prof. dr. David Martens, co-founder of PrediCube and Assistant Professor at the Faculty of Applied Economics, University of Antwerp. “First of all, web pages that use the PrediCube customer behavior prediction tool will ask users’ explicit consent to use those cookies. If the cookies are not accepted, no behavior tracking will take place.”

“Secondly, a number of privacy safeguards have been put in place,” David Martens continues. “Users are automatically ‘forgotten’ after 30 days, their online behavior is only tracked on premium web pages – not across the whole of the Internet – and their data is never sold to other parties.”

PrediCube: bringing together the best in research and entrepreneurship

PrediCube builds on the outcomes of the DiDaM project, a collaborative research effort under the banner of iMinds Media. DiDaM investigated ways of analyzing media users’ Internet sessions in real-time and identifying patterns to help advertisers integrate relevant ads into web pages (also in real-time). Objective: providing consumers with just those ads that are relevant to them. DiDaM’s research findings and the expertise from the Applied Data Mining research center of the University of Antwerp together laid the foundation of PrediCube. The PrediCube team can also count on the support of the Start it @KBC incubator – providing them with business guidance and office space.

 

About PrediCube

PrediCube is a new tech startup that uses advanced big data technology to predict which online users are interested in a product, allowing targeted advertising on premium websites. Or how a spinoff company of University of Antwerp and iMinds is ready to go head to head with Facebook and Google to compete for online advertising budgets. PrediCube results from an iMinds Media project that ended in 2014, investigating the potential of data for improved advertising (together with partners Concentra, Pebble Media, AdHese and KU Leuven). PrediCube has been co-founded by prof. David Martens, who heads the Applied Data Mining research group at the University of Antwerp (faculty of Applied Economic Sciences), and whose research focuses on the development and application of data mining algorithms. PrediCube was founded in October 2014, and is part of the Start it @KBC incubator. Current customers include Batibouw, Engels Ramen, Verandaland and Triple Living.


Press coverage about this launch:

Published Press articles about Brussels DataScience Community


Articles and Interviews have been published in:

RegionalIT, Technologium, Canal Z / Kanaal Z and New Europe Studio

 

By following Journalists:

Brigitte Doucet, Antoine Smets, Fabienne Lamot and Stijn Wuyts

Here are some articles published in the press about the Brussels Data Science Community.

 


 

BICC Congress, 26 March 2015 – Digital disruption? Business Intelligence strikes back!


Here is the latest newsletter from Thomas More.


8th BI Congress of BICC-Thomas More: 26 March 2015
Thursday 26 March 2015 – 4 pm to 8 pm
Campus De Ham, Raghenoplein 21bis, 2800 Mechelen
Participation is FREE; registration is required.

Under this working title, the 8th edition of the congress focuses above all on the strategic role of BI. Organisations today are flooded by a tsunami of data on the one hand, while on the other the deadlines for turning that data into insights keep getting tighter. Add to this challenge, in many cases, a legacy of suboptimal reporting projects, and it becomes clear that an efficient and effective approach to information management must be a spearhead of any modern governance process.

We are currently putting the finishing touches to the programme, but can already lift a tip of the veil with confirmed contributions from, among others, Jörgen Jacob (Managing Director of Fit IT), Philip Allegaert (Director Big Data & Analytics at Keyrus), Tobias Temmink (Business Development Manager at Teradata Benelux), Bart Maertens (Managing Partner, know.bi) and Dries Van Nieuwenhuyse (thought leader in the technology and statistics of decision-making processes).

Finally: participation in this independent event is still free.

So don’t hesitate and register here: http://bicc.thomasmore.be/drupal7/congres2015

Performance MANAGEMENT 3.0 – Self-evident

At the end of last year, the BICC organised the Performance MANAGEMENT event for the third year in a row. The event grew out of the Information Management graduation track and aims to offer a platform where our final-year students can share their experiences and lessons learned, and test them against business practice.

For this edition too, we again welcomed a strong representation of companies active in the field of information management, who greatly appreciated the poster pitch presentations.

The theme of this edition, “How to truly embed Performance MANAGEMENT in the DNA of an organisation”, was accordingly ambitious and illustrative of the strongly changed environment in which knowledge workers currently operate. Curious for more? View the presentations here: http://www.slideshare.net/BICCThomasMore/performance-management-30-de-evidentie-zelf

Next Generation Security

IT must develop new security strategies to limit the risks, and apply intelligence to protect the organisation and its assets through new analytics, innovation and a systematic approach to security.

The protection of personal data is, of course, part of an IT security expert’s job. According to Peter Berghmans, Data Protection Officer and privacy expert, privacy risk should be an integral part of the ‘cyber threat program’.

“The next big trend for IT experts is thinking about so-called ‘privacy by design’: creating products and services that strike the right balance between what consumers get from a product and the privacy they ‘give up’ in return (not always consciously). This is the so-called security paradox.”

Read more:

http://www.corporate-leaders.com/index.cfm/page:it-leaders/id:cio-vision-next-generation-security-executive-summary

Compilation – NG Data -50 Business Intelligence Blogs You Should Be Reading


 

We are included in the list of 50 of the most valuable BI blogs on the web 

Hi Philippe, 

I work with NGDATA, a customer experience management solutions company that enables organizations to maximize the value of their customer relationships through its breakthrough solution, Lily Enterprise™. I wanted to reach out and let you know that we’ve just released our list of “50 Business Intelligence Blogs You Should Be Reading,” and you’ve made the list.

Congratulations! You can see the full list here: http://www.ngdata.com/top-business-intelligence-blogs/

(Note: The list is in random order; the blogs are not necessarily ranked or rated in order of quality or importance. The aim is to recognize some of the best, most useful blogs for staying in the loop on all things Business Intelligence among the many BI and Analytics blogs on the web.)
        
It would be great if you’d share this news internally and/or with your audience on social media. You can find us on Twitter @ngdata_com.

Thanks so much for your time, and congratulations again!

Angela

Angela Stringfellow

NGDATA

 

49. The Brussels Data Science Community
@DataScienceBe

 


The fastest-growing community of data scientists in Europe, The Brussels Data Science Community is a European knowledge hub for all things Big Data and data science. The community organizes events, shares knowledge, and conducts training to bridge the gap between academics and business through the value of analytics.

Three posts we like from The Brussels Data Science Community: 

 

1 hour to speed up your Datascience

An excellent story from our friends at BigBoards.


Last week we were entertained by a professor from the States. Jared Lander (@jaredlander) joined us for a project we are currently involved in. Apart from being a really cool guy, he also gave us some ideas of what a DataScience Tint for the Hex could look like.

To give you some context, the first thing you need to know about Jared is that he is the author of “R for Everyone” (get the book, it is really awesome). So we suspected that the first recommendation he would make for the tint would be to install R onto it. Needless to say, we put that one at the top of the list.

Since neither Wim nor I are data scientists (although Wim is making up for that at a blistering pace), we did have some questions about what we could actually put into a tint like this. The whole concept of…

View the original post (151 more words).

Gartner’s CEO Survey Predicts Top Technology Investments For Next Five Years



 

In 2014, CEOs are relying most on digital marketing, e-commerce, customer experience management, business analytics and cloud business to improve performance over the next five years. CEOs and senior executives expect these five technology areas to deliver the most business value through 2019.

The report 2014 Gartner CEO and Senior Executive Survey: ‘Risk-On’ Attitudes Will Accelerate Digital Business, written by Mark Raskino, provides additional insights into these investment areas. You can find the 2013 edition of the study, CEO and Senior Executive Survey 2013: As Uncertainty Recedes, the Digital Future Emerges, written by Mark Raskino and Jorge Lopez and published on March 25, 2013, here (free PDF, no opt-in).

The following graphic, posted by Tiffani Bova, Vice President and Distinguished Analyst at Gartner, provides an overview of the top technology investments from 2014 to 2019.


Saying You Are Customer-Centric Is No Longer A Mantra, It’s Mandatory

 

It’s understandable that these five technology areas receive the highest priority in terms of their business value, as together they galvanize an enterprise around attracting and keeping customers. Taken together they also provide the CEO and senior executives with a more accurate view of how their selling and service strategies are attracting new customers and keeping existing ones.