TOP 6 – 2h Technical workshops during #DISUMMIT

Technical Workshops at the diSummit

In addition to the series of management workshops and over 40 presentations by an excellent line-up of speakers, you may be interested in one of our technical workshops, which provide hands-on training:

  1. Pieter Buteneers – Data Strategist and Machine Learning Consultant – Hands-on Machine Learning with Python
  2. Geert Verstraeten – Managing Partner at Python Predictions – Managing Projects in Predictive Analytics
  3. Thierry Turpin and Stéphane Tridetti – Optimize DHS with AWS
  4. Faisal Orakzai & Morad Masnaoui – Big Data Architecture
  5. Nicolas Deruytter – Managing Director at DataTonic – Hands-on Workshop Using TensorFlow
  6. IBM: Watson Data Platform – Ann-Elise Delbecq, Niko Tambuyser, Willem Hendriks, Luc Goossens
  • Please check disummit.com for more exciting presentations.

These are only some of the technical workshops we have available.

Thank you to our trainers for sharing their knowledge with us: Pieter Buteneers, Geert Verstraeten, Thierry Turpin, Stéphane Tridetti, Faisal Orakzai, Morad Masnaoui, Nicolas Deruytter, Ann-Elise Delbecq, Niko Tambuyser, Willem Hendriks & Luc Goossens.

 

Big Data and Ethics Meetup – Call for ideas/speakers.

Pierre-Nicolas Schwab, Big Data / CRM manager at RTBF (the French-speaking public broadcaster), will organize a conference in December on the ethical aspects of Big Data in the broadcasting industry. Although this conference is reserved for members of the European Broadcasting Union (www.ebu.ch), Pierre-Nicolas would be interested in sharing views with the Hub's members on the topic of Big Data and Ethics.

I think Big Data and Ethics is an important topic that we have covered insufficiently until now, both at the diSummit and at our Meetups. I would like our community to contribute to this topic through a special meetup on November 17th. I am asking interested members to contact me if they would like to contribute to this event in the form of a presentation.

Specific topics could be:

  • Good and bad practices as far as data usage is concerned
  • Examples of Big Data cases that triggered negative reactions
  • The consumer’s perspective: does sharing data with third parties add value for the customer or for the firm?
  • Implicit limits existing in your industry as far as the use of customers’ data is concerned
  • Paradigm shifts and possible unethical changes caused by Big Data modelling in your industry
  • Changes brought by IoT and possible threats to privacy and ethics
  • Measurement of intrusiveness / perception of intrusiveness by users
  • Forward-looking cross-industry trends that may have a negative impact on customers’ perceptions of Big Data

I welcome expressions of interest for this topic and would be glad to organize a preparatory meeting with interested speakers. Pierre-Nicolas has already proposed to host this meeting at RTBF to have fruitful discussions.

Thank you in advance for your answers.

Philippe


Previous post on this topic: https://datasciencebe.com/2016/08/16/next-executive-session-what-about-a-debate-on-big-data-ethics/

Job – external consultant – R & R-Shiny expertise – Brussels


Ref FL-Rshiny: Freelance, 3 months starting ASAP in Brussels.

For one of our Innovation Partners in Brussels, we are looking for an external consultant specialized in R and R Shiny.

The project aims to identify the communes with profitable growth potential and to highlight the brokers with whom we should work more closely to reach our objectives.

An important aspect of this project is the visualisation of this information on a map. We chose R Shiny to develop it, but our knowledge of the framework is still limited and we would like to add new functionality.

The consultant we are looking for must have strong knowledge of R, R Shiny and R Leaflet, and ideally JavaScript. Good pedagogical skills are also required: the mission is not to program alone, but to help upgrade the team's knowledge through close cooperation while programming. Finally, the working language will be English.

Examples of functionality already in the tool:

  • Map showing the location of the different points of sale, with key figures shown in popups
  • Table summarizing key variables about the communes and the points of sale, with the possibility to download those tables in Excel format
  • Value boxes, inside or outside the map, giving information about the main KPIs of the geographical zone
  • Possibility to zoom in on strategic geographical zones on the map
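To give candidates a feel for the kind of tool involved, here is a minimal sketch of such an app in R, assuming the `shiny` and `leaflet` packages; the data frame, its column names and the figures are entirely hypothetical, and the real project's data model will differ:

```r
library(shiny)
library(leaflet)

# Hypothetical points of sale; real columns and coordinates will differ.
pos <- data.frame(
  name  = c("Broker A", "Broker B"),
  lat   = c(50.85, 50.63),
  lng   = c(4.35, 5.57),
  sales = c(120, 85)
)

ui <- fluidPage(
  leafletOutput("map"),                    # interactive map with popups
  tableOutput("summary"),                  # summary table of key variables
  downloadButton("dl", "Download (CSV)")   # table export
)

server <- function(input, output, session) {
  # One marker per point of sale, key figure in the popup
  output$map <- renderLeaflet({
    leaflet(pos) |>
      addTiles() |>
      addMarkers(~lng, ~lat, popup = ~paste0(name, ": ", sales, " sales"))
  })
  output$summary <- renderTable(pos)
  # CSV opens in Excel; writexl::write_xlsx() would produce a true .xlsx
  output$dl <- downloadHandler(
    filename = "pos_summary.csv",
    content  = function(file) write.csv(pos, file, row.names = FALSE)
  )
}

shinyApp(ui, server)
```

Zooming to a strategic zone could then be added with `leaflet::setView()` or `flyTo()` wired to an input control.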

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you would like us to bring you into contact for those as well, just send your CV to datasciencebe@gmail.com.

Please send your application to pvanimpe@dihub.eu with reference FL-Rshiny.

How to innovate in the Age of Big Data presented by Stephen Brobst


Executive Summer Session

Stephen Brobst will be at the European Data Innovation Hub. We asked him to share his views on the importance of open data, open source, analytics in the cloud and data science. Stephen is at the forefront of technology, and we can't wait to hear what is happening in Silicon Valley. You can count on leaving the workshop inspired and armed with actionable ideas that will help define a profitable strategy for your data science teams.

Format of the session :

  • 15:00 – Keynote: How to Innovate in the Age of Big Data
  • 15:50 – Open discussion on “Sustainable Strategies for Data Science”, tackling the following topics:
  • Data Science is the Key to Business Success
  • Three Critical Technologies Necessary for Big Data Exploitation
  • How to Innovate in the Age of Big Data
  • 16:45 – Networking Session

Stephen Brobst is the Chief Technology Officer for Teradata Corporation.  Stephen performed his graduate work in Computer Science at the Massachusetts Institute of Technology where his Masters and PhD research focused on high-performance parallel processing. He also completed an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management.  Stephen is a TDWI Fellow and has been on the faculty of The Data Warehousing Institute since 1996.  During Barack Obama’s first term he was also appointed to the Presidential Council of Advisors on Science and Technology (PCAST) in the working group on Networking and Information Technology Research and Development (NITRD).  He was recently ranked by ExecRank as the #4 CTO in the United States (behind the CTOs from Amazon.com, Tesla Motors, and Intel) out of a pool of 10,000+ CTOs.

Analytics: Lessons Learned from Winston Churchill


I had the pleasure of being invited to lunch by Prof. Baesens earlier this week, and we talked about a possible next meetup subject: 'War and Analytics'. As you might know, Bart is a WWI fanatic and has already written a nice article on the subject, 'Analytics: Lessons Learned from Winston Churchill'.

Here is the article:

From Nicolas Glady's online articles: Analytics: Lessons Learned from Winston Churchill

Analytics has been around for quite some time now.  Even during World War II, it proved critical for the Allied victory. Some famous examples of allied analytical activities include the decoding of the enigma code, which effectively removed the danger of submarine warfare, and the 3D reconstruction of 2D images shot by gunless Spitfires, which helped Intelligence at RAF Medmenham eliminate the danger of the V1 and V2 and support operation Overlord. Many of the analytical lessons learned at that time are now more relevant than ever, in particular those provided by one of the great victors of WWII, then Prime Minister, Sir Winston Churchill.

The phrase “I only believe in statistics that I doctored myself” is often attributed to him. However, while its wit is certainly typical of the Greatest Briton, it was probably a Nazi Propaganda invention. Even so, can Churchill still teach us something about statistical analyses and Analytics?

 

A good analytical model should satisfy several requirements depending upon the application area and follow a certain process. CRISP-DM, a leading methodology for conducting data-driven analysis, proposes a structured approach: understand the business, understand the data, prepare the data, design a model, evaluate it, and deploy the solution. The wisdom of the 1953 Nobel laureate in Literature can help us better understand this process.

Have an actionable approach: aim at solving a real business issue

Any analytics project should start with a business problem, and then provide a solution. Indeed, Analytics is not a purely technical, statistical or computational exercise, since any analytical model needs to be actionable. For example, a model can allow us to predict future problems like credit card fraud or customer churn rate. Because managers are decision-makers, as are politicians, they need “the ability to foretell what is going to happen tomorrow, next week, next month, and next year… And to have the ability afterwards to explain why it didn’t happen.” In other words, even when the model fails to predict what really happened, its ability to explain the process in an intelligible way is still crucial.

In order to be relevant for businesses, the parties concerned need first to define and qualify a problem before analysis can effectively find a solution. For example, trying to predict what will happen in 10 years or more makes little sense from a practical, day-to-day business perspective: “It is a mistake to look too far ahead. Only one link in the chain of destiny can be handled at a time.”  Understandably, many analytical models in use in the industry have prediction horizons spanning no further than 2-3 years.

Understand the data you have at your disposal

There is a fairly large gap between data and comprehension. Churchill went so far as to argue that “true genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.”  Indeed, Big Data is complex and is not a quick-fix solution for most business problems. In fact, it takes time to work through and the big picture might even seem less clear at first. It is the role of the Business Analytics expert to really understand the data and know what sources and variables to select.

Prepare the data

Once a complete overview of the available data has been drafted, the analyst will start preparing the tables for modelling by consolidating different sources, selecting the relevant variables and cleaning the data sets. This is usually a very time-consuming and tedious task, but needs to be done: “If you’re going through hell, keep going.”

Never forget to consider as much past historical information as you can. Typically, when trying to predict future events, using past transactional data is very relevant as most of the predictive power comes from this type of information. “The longer you can look back, the farther you can look forward.”

Read more here.

Launching the first Data Science Bootcamp in Europe

We are delighted to launch the first European data science bootcamp.

It is a pleasure to announce the launch of the first European data science bootcamp, which will start this summer in Brussels. This initiative will boost companies' digital transformation efforts by helping them improve their data skills, either by recruiting trainees and young graduates or by transforming existing BI teams into experienced business data scientists.

An intense 5+12-week approach focused on practical, directly applicable business cases.

The content of this bootcamp originated from the Data Science Community. Following the advice of our academic, innovation and training partners, we have decided to offer a unique hands-on 5+12-week programme.

  1. We call the first 5 weeks the Summer Camp (starting August 16th). Participants work on-site or remotely on e-learning MOOCs from DataCamp to demonstrate their ability to code in Python, R, SAS and SQL and to master statistical principles. During this period, experts put all their energy into coaching the candidates to keep up the pace and finish the exercises. All activities take place in our training centre, located in the European Data Innovation Hub.
    -> If you are a young graduate, you can expect to be contacted by tier-one companies who will offer you a job or traineeship that starts with participation in the data science bootcamp.
  2. The European Data Science Bootcamp starts September 19th. Over a 12-week period – every Monday and Tuesday – participants will work on 15 different business cases presented by business experts from different industries, covering diverse business areas. Each Friday, the future data scientists will gather to work on their own business case, coached by our data experts, to deliver an MVP (Minimum Viable Product) at the conclusion of the bootcamp.

Delivering experienced business data science professionals after an intense semester of hands-on business cases.

Companies are invited to reserve seats for their own existing staff or for the young graduates who have expressed interest in following the bootcamp.

Please reserve your seat(s) now, as this bootcamp is limited to 15 participants.

Please contact Nele Coghe at training@di-academy.com or click on di-Academy for more information about this first European Data Science Bootcamp.

  • Here is the PowerPoint presentation explaining the Bootcamp.
  • Here is the presentation given by Nele during the Data Innovation Summit.

Hope to see you soon at the Hub,

Philippe Van Impe
pvanimpe@di-academy.com

Data Science Meetup about the Panama Papers with Mar Cabra


A big crowd attended the inspiring talk by Mar Cabra of the ICIJ last Thursday at the Data Science Meetup at the VUB in Brussels.

Mar gave a whole new meaning to “Messi data”. The data was originally obtained from an anonymous source by reporters at the German newspaper Süddeutsche Zeitung, who asked the ICIJ to organize a global reporting collaboration to analyze the files.

More than 370 reporters in nearly 80 countries probed the files for a year. Their investigations uncovered the secret offshore holdings of 12 world leaders, more than 128 other politicians and scores of fraudsters, drug traffickers and other criminals whose companies had been blacklisted in the US and elsewhere.

Here is the link to her presentation.

The data is available and can be downloaded! Users can now search through the data and visualize the networks around thousands of offshore entities, including, where available, Mossack Fonseca's internal records of the companies' true owners. The interactive database also includes information about more than 100,000 additional companies that were part of the 2013 ICIJ Offshore Leaks investigation.

Try it yourself and download the database:


We were very happy that she could do this for our community. We did not record the presentation, but here are two related videos:

How the ICIJ Used Neo4j to Unravel the Panama Papers – Mar Cabra
https://www.youtube.com/watch?v=S20XMQyvANY
(very similar to last night, from GraphConnect Europe in London on 26th April);

The Making of a Scoop – The Panama Papers (W.Krach,
Süddeutsche Zeitung & K.Auletta) | DLDnyc 16
https://www.youtube.com/watch?v=_Yfq1gwAQZE

 

 

 

The Panama Papers is a global investigation into the sprawling, secretive offshore industry that the world's rich and powerful use to hide assets and skirt rules by setting up front companies in far-flung jurisdictions.
Based on a trove of more than 11 million leaked files, the investigation exposes a cast of characters who use offshore companies to facilitate bribery, arms deals, tax evasion, financial fraud and drug trafficking.
Behind the email chains, invoices and documents that make up the Panama Papers are often unseen victims of wrongdoing enabled by this shadowy industry.

#DIS2016 – The annual gathering of the Belgian Data Science community

Digital Transformation is the main theme of the Data Innovation Summit 2016: the event that brings together the Belgian data ecosystem.

Join us on Tuesday, May 10th 2016 at 25 Boulevard du Souverain, 1170 Watermael-Boitsfort, for a day packed with more than 40 expert presentations, 20 company stands, and plenty of networking opportunities.

The annual gathering of the Belgian Data Science community

The success of the first edition motivated the organizers to make the Data Innovation Summit a recurring event.

“The Belgian Data Science ecosystem needed a gathering where its members could get to know each other and exchange ideas. That is how the Data Innovation Summit was born,” explains Philippe Van Impe, founder of the European Data Innovation Hub, which organizes the event.

Connecting, learning and having fun are the watchwords that last year brought together all the players in the Data Science world: startups and large companies, as well as academics and government representatives. This year again, these three words are drawing people in, and more than 300 attendees are expected at the Summit.

An edition under the banner of Digital Transformation and female inspiration

Today's economy is evolving very quickly. Innovation is an essential pillar of this change, and it is imperative for economic players not to miss the Digital Transformation train. This obviously applies to the data domain as well.

As an additional challenge this year, the majority of the experts presenting their stories at the Summit will be women. “Last year, out of 68 speakers we had 3 women. This year, we decided to turn the tables to show that women are just as involved in this seemingly male-dominated world,” says Philippe Van Impe.

A rich and inspiring programme

The event is organized around several highlights:

  • Experts in Data and Digital Transformation will talk about their experience: discover the stories of Saskia Van Uffelen (CEO of Ericsson), Ursula Dongmo (ICT Woman of the Year 2016) and Annemie Depuydt (CIO of the university of Louvain-la-Neuve).
  • An exhibition hall, where you can talk with companies, try a virtual reality headset, or discover connected works of art.
  • Networking breaks: thanks to Conversation Starter, a Brussels startup, #DIS2016 visitors can plan their visit and schedule meetings throughout the day.

Full programme of the event: http://disummit.com/program-main-track/

Contact & organisation

  • Brussels Data Science Community
  • Philippe Van Impe
  • 0477 23 78 42
  • pvanimpe@gmail.com
  • Website : www.disummit.com
  • Twitter: @DataEu
  • Facebook group: European Data Innovation Hub

