7th Data Science Leuven Meetup – 29/5


Details

Well, well, well, I bet some of you didn’t expect a DSL meetup in May anymore. Ha! That’s where some of you were wrong! We are sorry that it took so long but we can finally announce a meetup on the 29th of May. We have one confirmed speaker already. More to follow!

7pm: Doors

7.30pm:
– Mike Friendman (Solvay) – The value of HR Analytics
– Maarten Weyn (Professor UA) – Send Time optimization in smart city sensors using distributed algorithms
– Speaker 3

9pm: Beers (STUKcafé, Naamsestraat 96, 3000 Leuven)

 

Register Here

 

 

May 24th – Data Science Liège – MEETUP #4

Thursday May 24th 2018, from 6:30 pm

HEC Liège, Rue Louvrex 14, Liège (parking entrance on Rue St Gilles)

Email: datascience@uliege.be

Version française

REGISTRATION

Follow us on Twitter !

Program

6.30 pm: Doors opening
7.00 pm: Short talks

Digital alchemy, or how to put data mining at the heart of your company’s strategy by David Eskenazi (R&D Expert – Prayon)

Over the past two years, the management of the Prayon group has made a bet on exploiting the wealth contained in its data. The information-analysis and model-learning techniques available today enable a profound revolution in the way data impacts a company’s development. Integrating and exploiting sensors and data-storage systems, coupled with algorithms that allow very fine-grained, predictive analyses, supports approaches that had until now been little explored in the mineral-chemistry industry.


Opisense platform for data science industrialisation: use case with smart meters in the energy sector by Loïc Bar (CEO – Opinum)

Opinum has developed Opisense, an analytics platform designed to process IoT (Internet of Things) data in the energy and environmental sectors. The platform automates algorithmic processing of data from smart meters and from decentralised energy-production systems. It is used by companies such as EDF, Colruyt, Total and Lampiris.

An introduction to the STRANGER package by Eric Lecoutre (Data Science Advisor – WeLoveDataScience!)

We introduce the STRANGER package, for: Simple Toolkit in R for Anomaly Get Explain and Report. Stranger is to anomaly detection what ‘caret’ is to supervised modeling: a wrapper package providing a common way to use a set of methods (from other packages) and extending their usage with utilities such as stacking and visualization methods. This talk won’t present the various anomaly detection methods themselves (a workshop is in preparation) but will illustrate the typical analysis workflow with stranger on a concrete case.

8.00 pm: Panel discussion & closing remarks
8.30 pm: Beers & Networking
9.30 pm: Closing

Meetup – August 1st – Q&A on GDPR implementation

 


Data Science Community and Legal Hackers join forces to untangle the GDPR mystery and to propose a workable, easy-to-use framework that allows every data scientist to stay within the legal boundaries.

At first sight it may not look so, but lawyers and data scientists have things in common. One of those things is the struggle with the implementation of the General Data Protection Regulation (“GDPR”). Each looks at it from a different perspective, but they often have the same goal: support the business in handling and using (personal) data “responsibly”, like a “good head of the family”, in line with the state of the art and the law.

This is one of the conclusions we drew from the “data science and ethics” event in June.

So let’s try something. Let’s join forces and exchange best practices in a “questions and answers” session on GDPR implementation.

The idea

The idea is to enter the meeting with questions… and to come out of it with (some) answers.

As for the questions, please do one of the following:

  • raise them in the comment section
  • if you want, prepare a short presentation to give some context or an overview
  • keep a little notebook for them and bring it to the meetup

The questions can be legal, technical, organisational, practical,… We hope to gather all types of skills in the room to find the answers. Note that this also means you don’t necessarily have to (already) be involved in a GDPR implementation at your company at present. The questions of people a bit further removed from a practical implementation may shed fresh light on things.

As for the answers, they could be:

  • a clear view on the theory,
  • potential tooling,
  • templates or (anonymised) examples.

The approach

ASSEMBLY – Depending on the size of the group of attendees and the interests, we could go over all the questions in a single group or break up in sub-groups. In the latter case, we would assemble at the beginning (for an introduction) and at the end of the session (for a short report on each sub-group).

CONFIDENTIALITY – For people to open up in the meeting, we emphasize the commitment to confidentiality of all the attendees. Everything shared in the context of the event (prior to it, during and afterwards) should be kept confidential. So be careful when tweeting, taking photographs, making (video) recordings,… The practical information received can at best only be used anonymously (without reference to the source) and obfuscated (commingled amongst other sources). If you are not sure, explicitly ask the source if and how to use the info or material.

INFORMALITY – The approach is rather informal. This also means that any answers etc. are not advice and just a best effort nudge in a (good) direction. We don’t want anybody to worry about being called on for being open or offering an opinion during the meeting.

The joint forces

This organisation is a joint effort of (at least) two communities in Brussels:

 

What if it works?

If this seems to work out fine, we are considering organising this once every month or every two weeks.

OCT20 – FREE Meetup about Process Mining @VUBrussel


18:30 Update on the activities of the Data Science Community

Confirmed speakers:

19:00 Jochen De Weerdt (KU Leuven) : Process mining – Making data science actionable

19:30 Mieke Jans (UHasselt): The art of building valuable event logs out of relational databases

20:00 Pieter Dyserinck (ING) & Pieter Van Bouwel (Python Predictions): Process mining, the road to a superior customer experience

20:30 Open discussion and flash presentations. Startups welcome.

20:40 Networking and drinks at ’t Complex, located above the swimming pool

Reserve your seat here

Analytics: Lessons Learned from Winston Churchill


I had the pleasure of being invited to lunch by Prof. Baesens earlier this week, and we talked about a possible next meetup subject: ‘War and Analytics’. As you might know, Bart is a WWI fanatic and has already written a nice article on the subject called ‘Analytics: Lessons Learned from Winston Churchill’.

Here is the article:

Nicolas Glady’s Activities


Analytics has been around for quite some time now. Even during World War II, it proved critical for the Allied victory. Famous examples of Allied analytical activities include the decoding of the Enigma code, which effectively removed the danger of submarine warfare, and the 3D reconstruction of 2D images shot by gunless Spitfires, which helped Intelligence at RAF Medmenham eliminate the danger of the V1 and V2 and support Operation Overlord. Many of the analytical lessons learned at that time are now more relevant than ever, in particular those provided by one of the great victors of WWII, then Prime Minister, Sir Winston Churchill.

The phrase “I only believe in statistics that I doctored myself” is often attributed to him. However, while its wit is certainly typical of the Greatest Briton, it was probably a Nazi Propaganda invention. Even so, can Churchill still teach us something about statistical analyses and Analytics?

 

A good analytical model should satisfy several requirements depending upon the application area and follow a certain process. The CRISP-DM, a leading methodology to conduct data-driven analysis, proposes a structured approach: understand the business, understand the data, prepare the data, design a model, evaluate it, and deploy the solution. The wisdom of the 1953 Nobel Prize for literature can help us better understand this process.

Have an actionable approach: aim at solving a real business issue

Any analytics project should start with a business problem, and then provide a solution. Indeed, Analytics is not a purely technical, statistical or computational exercise, since any analytical model needs to be actionable. For example, a model can allow us to predict future problems like credit card fraud or customer churn rate. Because managers are decision-makers, as are politicians, they need “the ability to foretell what is going to happen tomorrow, next week, next month, and next year… And to have the ability afterwards to explain why it didn’t happen.” In other words, even when the model fails to predict what really happened, its ability to explain the process in an intelligible way is still crucial.

In order to be relevant for businesses, the parties concerned need first to define and qualify a problem before analysis can effectively find a solution. For example, trying to predict what will happen in 10 years or more makes little sense from a practical, day-to-day business perspective: “It is a mistake to look too far ahead. Only one link in the chain of destiny can be handled at a time.”  Understandably, many analytical models in use in the industry have prediction horizons spanning no further than 2-3 years.

Understand the data you have at your disposal

There is a fairly large gap between data and comprehension. Churchill went so far as to argue that “true genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.”  Indeed, Big Data is complex and is not a quick-fix solution for most business problems. In fact, it takes time to work through and the big picture might even seem less clear at first. It is the role of the Business Analytics expert to really understand the data and know what sources and variables to select.

Prepare the data

Once a complete overview of the available data has been drafted, the analyst will start preparing the tables for modelling by consolidating different sources, selecting the relevant variables and cleaning the data sets. This is usually a very time-consuming and tedious task, but needs to be done: “If you’re going through hell, keep going.”

Never forget to consider as much historical information as you can. Typically, when trying to predict future events, past transactional data is very relevant, as most of the predictive power comes from this type of information. “The longer you can look back, the farther you can look forward.”

Read more here.

Join us – Silicon Valley Inspiration Tour – March 26, 2016

 


Visit the Silicon Valley ecosystem:

Just after the Data Innovation Summit, to be held in Brussels on March 23, the Data Science Community is organising a low-budget innovation tour of Silicon Valley.

Silicon Valley has a variety of business organizations and institutions that create a business environment that has proved to be highly conducive to the successful creation of startup firms, disruptive business models, and leadership in a variety of high-tech areas.

With our team we want to get a first-hand impression of the key components of Silicon Valley: how they work and how they fit together.

Key Characteristics of the Silicon Valley Ecosystem that we will focus on:

  • Dual ecosystem of large firms and startups
  • High financial returns for successful entrepreneurs and startups’ early employees
  • Global top-level human resources for all stages of startups
  • Business infrastructure (law firms, accounting firms, mentors, etc.)
  • Venture capital – most competitive market
  • Globally top class universities (Stanford, UC Berkeley, UCSF)
  • Human resource clusters anchored around top universities
  • Extensive government role in shaping technological trajectories and basic science
  • Highly competitive industries, balancing “open innovation” with intellectual property protection
  • “Technology Pump” of top human resources from all over the world
  • High labor mobility at all levels of management and talent
  • Culture of accepting failures (effective evaluation and monitoring)

We plan to arrive on Saturday. We will rent a house for our community members in San Francisco from Saturday 26/3 to Tuesday 29/3, and then move to a house in San Jose on Wednesday.

Agenda of our Visit:

We have lined up a few companies that we want to meet. If you can help us and facilitate the access to an interesting company please let us know.

We have already received some nice invitations from:

We plan to attend a few meetups too, like the one from SF DataScience on Monday about ‘Kafka and Data Science’, …

On arrival, the ‘Sons of Analytics’ will rent bikes to attend the LAUGHLIN RIVER RUN.

And of course we will travel to San Jose to attend the Strata + Hadoop event. Please always use our 20% discount code UGBDSC for any O’Reilly activity.

Strata + Hadoop World is the leading event on how big data and ubiquitous, real-time computing is shaping the course of business and society. It brings together the world’s best data scientists and business leaders to share hard-won knowledge and innovations in technology and strategy.

Check out the impressive program and make plans to join Strata + Hadoop World in San Jose March 28-31, 2016.

Save 20% on most passes with discount code UGBDSC.

“The best opportunity to learn about the technologies that are transforming big data and data science.”

Costs

We will travel on a budget and share cars and houses, so make sure you bring a sleeping bag. Our first estimate is that you should be able to do this for less than 3,000.

Flight: 800
Accommodation (6 nights): 300
Food and drinks: 600
Car rental: 120
Petty cash: 200
Strata ticket: 1000
Total: 3020
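As a quick sanity check, the line items do add up to the quoted total. A minimal sketch (amounts are the rough, currency-unspecified estimates from the budget above):

```python
# Rough per-person budget estimate for the Silicon Valley tour.
# Figures are the ballpark amounts quoted in the budget above.
budget = {
    "Flight": 800,
    "Accommodation": 300,
    "Food and drinks": 600,
    "Car rental": 120,
    "Petty cash": 200,
    "Strata ticket": 1000,
}

total = sum(budget.values())
print(total)  # 3020 — consistent with the "less than 3,000" ballpark, give or take
```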

Coordinator

  • scouting to find interesting meetings: all community members + Philippe
  • practical organisation: Laurence
  • if you are interested just put your name on the list

 

New Year’s Drink @VUB – January 21, 18:30


 

Dear Friends,

You are kindly invited to our New Year’s drink in the Opinio @VUB, after our first meetup of the year, focusing on climate change and what the data science community could do about it.

Join us next Thursday 21/1 at the VUB for a #Data4Good meetup organised by the Brussels Data Science Community about Climate and #datascience with Kris Peeters, Philippe Van Impe, Jeff R Noel, Matthew Porcelli, Christophe Vercarre, Maarten Lambrechts.

Register Here

Please engage with us often.

Agenda:

18:30 Update on the activities of the HUB and the latest news on our community

19:00 The Weather Company presents its dataset and how it can be used.

• Part 1: Why Businesses Can Create Better Outcomes as the Climate Changes

 Description: Climate change is causing more extreme weather, and businesses and governments must prepare for it. Weather is no excuse for poor outcomes (business or public): better decisions can be made.

 Speaker and biography:  Jeff Noel, The Weather Company, Business Development Director – IBM Strategic Partnership
— Topics covered in the first talk:

  • Big Data and IoT: why weather is a big deal to IBM and to you
  • Leveraging data to make decisions that make a difference: examples
  • Preparing for the future

• Part 2: Weather Data, a true IOT Story

 Description: The Weather Company collects data from many sources and uses this data to improve understanding of current conditions and forecasts globally.  The data is extensive and easily leveraged to deliver new solutions.

 Speaker and biography: Matthew Porcelli, The Weather Company, Senior Sales Engineer – IBM Partnership

— Topics covered in the second talk:

  • Why weather data is a key part of IoT: types and structure
  • Enterprise global data from APIs: API structure and using the data
  • There is so much more!

19:45 Christophe Vercarre, active in various sustainability projects at Futureproofed.

Futureproofed has been the leading sustainability consultancy in Belgium for 14 years, addressing climate change with business models: objectively measuring and mapping organisations’ impact, developing the best possible business solutions to minimise environmental impact, and market-testing them with customers for maximum revenue impact. Serge de Gheldere and his international team of engineers and economists develop tools and strategies for decision-makers in a climate-challenged and competitive world.

20:05 Maarten Lambrechts, whose stories can be found in De Tijd, is a data visualizer and storyteller, and has prepared a story on “How Belgium is heating up: the making of”.

20:30 Kris Peeters is the founder of Dataminded, a company specialized in data science and big data technology. His talk will focus on “How can we help climate scientists with modern data technology?” Kris built an open-source demo using four years of NASA satellite pictures of the world, showing how CO2 is spread across countries.


21:00 New Years Drink at the Opinio

 

 

 

January Meetup – Climate Change – Call for papers

How can #datascience help fight global warming?

Let’s do a data4good meetup in January to tackle climate change issues and see how the biggest economic and social opportunity of all time could be facilitated by modern data science.

The subject of the January meetup will be in line with today’s news. In a series of presentations we will see what #datascience experts can bring to the table to work on these devastating issues.

Time for action! Time for you to take action! This meetup will be all about your actions, your presentations, your ideas; the floor will be all yours. Reserve your speaking slot now.

 

Call for papers:

 

Register for free:

How can #datascience help fight earth warming?

Thursday, Jan 21, 2016, 6:30 PM

VUB – Aula QB
Pleinlaan 2B – 1050 Brussels, BE



Check out this Meetup →

 

List of Open Data Portals from Around the World by DataPortals.org


We will have a special meetup about Using Open data to promote Data Innovation on Thursday, November 26, 2015 @VUB. Here is a Comprehensive List of Open Data Portals from Around the World.

DataPortals.org is the most comprehensive list of open data portals in the world. It is curated by a group of leading open data experts from around the world – including representatives from local, regional and national governments, international organisations such as the World Bank, and numerous NGOs.

The alpha version of DataPortals.org was launched at OKCon 2011 in Berlin. We have plenty of useful improvements and features in the pipeline, which will be launched over the coming months.

If you have an idea for a feature you think we should add, please let us know via the discussion forum.

open data

The Open Definition sets out principles that define “openness” in relation to data and content.

It makes precise the meaning of “open” in the terms “open data” and “open content” and thereby ensures quality and encourages compatibility between different pools of open material.

It can be summed up in the statement that:

“Open means anyone can freely access, use, modify, and share for any purpose (subject, at most, to requirements that preserve provenance and openness).”

Put most succinctly:

“Open data and content can be freely used, modified, and shared by anyone for any purpose.”

Meetup – OCT22 – Brussels – A journey through advanced analytics in insurance


Data science is going to drastically change the insurance industry. Through the analysis and exploitation of massive data, the possibilities opened up by Big Data will have a long-term impact on the market. Data science will allow insurance companies to offer better services, individually adjusted to the uses and needs of their beneficiaries.

Agenda:

18:30 monthly update by Philippe Van Impe

19:00  « Geographical pricing in Motor Insurance »
by Xavier Maréchal from Reacfin.

19:25 « A journey through advanced analytics in insurance »
by Colin Molter from AXA

19:50 « Data-driven valuation as an alternative to a grid in fire insurance »
by Michaël Peeters from Mimec

20:15 « Overview of the activities of the Fintech community »
by Eric Rodriguez &  Toon Vanagt

20:40 A series of open 5-minute presentations from anyone with an opinion or an interesting project

  • Noisy Channels by Filip Deryckere
  • Brainvolve by Steven De Blieck
  • transformy.io by Tomas Broodcoorens

21:10 Networking at the Opinio

Register for free:

The Value of Advanced Analytics in the Insurance business

Thursday, Oct 22, 2015, 6:30 PM

D.0.07
VUB Pleinlaan 2B – 1050 Brussels, BE



Check out this Meetup →