Overall Winner of the DengueHack Hackathon: DatAsset

Dengue, the most significant arbovirus (arthropod-borne virus, a family that also includes yellow fever and Zika), is spreading rapidly, and half of the world’s population is at risk of contracting it.

Over the last half-century, the number of people affected by dengue has increased 30-fold (say whaat!?), and it threatens to invade Europe and the United States.

We were happy to stumble upon the DengueHack.org project, so we gathered together an AI kid, a monsieur docteur, and a happy bunch of data wizards to stop this ominous prospect by hacking the heck out of Dengue. In data we trust! Together, as data scientists, we embarked upon this epic 36-hour journey.

Ideas for approaching this project were manifold, and we wanted to make real use of our team’s diverse set of skills. Eventually, we settled on three goals: building a predictive incidence model for Sri Lanka, one of the South Asian countries most heavily impacted by dengue; building a compelling visual representation of past and predicted data; and exploring the possibilities of our approach from a public health perspective.

For our model, we experimented with including important national socio-economic and environmental data, with the goal of ultimately applying it to different countries as a next step.

Why building a predictive model is so important: if an outbreak is only detected at a late stage, intervention measures become decreasingly effective. If an epidemic can be detected earlier, or even predicted before it occurs, the number of cases avoided increases (as can be seen in the image below).

The value of outbreak prediction

Who needs sleep? Even during the few hours of rest we had, our models kept running in our brains. With concerted effort, at the end of the thirty-six hours we arrived at a nice final presentation for you to enjoy. Feedback is most welcome!

Thanks so much to the organizers for this wonderful event; you’ve made us realize that there’s so much common ground and cause between the worlds of data science and public health. Data for good has a bright future!


The DatAsset team: Klaas, Michael, Pieter, Andreas, Joren

Call for Speakers – diSummit – March 30, 2017


Hello from everyone at the European Data Innovation Hub. We’re excited to announce our Call for Speakers for the diHub’s 2017 diSummit. This year’s event will be based on the theme of how we can use data for good, and will take place on March 30th at the ING building in the Brussels city center.

We will begin announcing our forty-five selected speakers in mid-January.

You can visit the website for the event here, and if you’re interested in speaking at the event, please fill out our call for speakers form here. Tickets are currently available for sale on Eventbrite.

Hackathon Winner for Best Storytelling: XploData

XploData is the winner of the DengueHack.org hackathon prize for Best Storytelling. You can view their presentation from the event here.

Our[1] goal at the DengueHack.org hackathon was to determine the main factors or variables (climate, population, livestock, vegetation…) that influence both Aedes mosquitoes and the dengue virus. We approached this by building two models: the first would take climate and population data to predict mosquito presence in parts of the world where this data is lacking, and the second would integrate this new data with climate and population data to predict dengue outbreaks.

During the hackathon we faced several problems in creating these models, so we turned our focus to a different second model. Data on dengue worldwide is inconsistent: we have reports of countries with dengue, countries where we can confidently say dengue is not present, and countries where dengue may or may not be present. By looking at environmental variables that explain the presence or absence of dengue, we were able to create a model that could estimate the chance of dengue being present in the countries where it remains unconfirmed.

The problem we faced in building our first model was that the available data for the mosquitoes was ‘presence’ data only, with no real absence data. We therefore added an artificial temperature threshold to the model to create so-called ‘pseudo-absence’ data of mosquitoes. Of course, this resulted in a model that lacked the sophistication we had hoped for. The future is promising, though. Satellites and satellite imaging are constantly improving, and with better imaging technology we could gather more precise data on vegetation, livestock movement, and population movements. Bringing together in one table, for as many points on earth as possible, climate data, population data, livestock data, vegetation data, the amount of standing water, mosquito data, and dengue data would greatly improve our predictions on dengue. After all, the impact of being able to predict the presence of dengue cannot be overstated.
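The pseudo-absence idea is easy to sketch in code. Below is a minimal, hypothetical Python illustration: the 10 °C threshold, the simulated climate grid, and all the numbers are our own stand-ins, not the actual data or cutoff used during the hackathon.

```python
import random

# Hypothetical illustration of the pseudo-absence technique; the grid,
# the threshold value, and the presence sample are all made up.
random.seed(42)

# Simulated climate grid: each cell has a mean temperature (in °C).
grid = [{"cell": i, "temp": random.uniform(-5, 35)} for i in range(1000)]

# Presence records: cells where mosquitoes were actually observed
# (here, a random sample of warm cells stands in for real field data).
warm_cells = [c for c in grid if c["temp"] > 18]
presence_ids = {c["cell"] for c in random.sample(warm_cells, 50)}

# Pseudo-absences: cells we *assume* (rather than observe) to be empty,
# because they fall below a temperature threshold for mosquito survival.
TEMP_THRESHOLD = 10.0
pseudo_absence_ids = {c["cell"] for c in grid if c["temp"] < TEMP_THRESHOLD}

# Labelled training set: 1 = presence, 0 = pseudo-absence.
training = [(c["temp"], 1) for c in grid if c["cell"] in presence_ids]
training += [(c["temp"], 0) for c in grid if c["cell"] in pseudo_absence_ids]

print(f"{len(presence_ids)} presences, {len(pseudo_absence_ids)} pseudo-absences")
```

In the real model, a labelled table like this would then feed a classifier alongside the other environmental variables.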

To conclude, we want to thank the organizers for giving us the opportunity to join the hackathon, discover new technologies, and meet interesting people. Special thanks also to the members of Teradata, who greatly helped us during our preparations and the final building of the models.

[1] Our team consisted of members of XploData, i4BI, Janssen Pharma, and the University of Liège: scientists, engineers, physicists, and informatics specialists. We worked in a multidisciplinary way, combining skills such as data engineering, data science, data modelling, and data visualization. Work hard, play hard (Figure 1).


Figure 1: Hacking is fun!

Evoliris Conference – DEC 7 – Big Data in Brussels.



Mark your calendars! On Wednesday 7 December, Evoliris will host its conference: “Big Data in Brussels today! And tomorrow?”


Welcome and introduction – Jean-Pierre Rucci, Director of Evoliris; Floriane de Kerchove, Director of Agoria and President of Evoliris

Presentation of the report “Big Data in Brussels today. And tomorrow?” – Christina Galouzis, Evoliris

Talk in French

The conference is the occasion to officially present the first issue of the Evoliris report series. Our first topic covers a technological trend that everyone is talking about: Big Data. The presentation addresses the points developed in the report in terms of social policy, employment, and training. Together we will review the findings, as well as the recommendations and conclusions, to start a conversation about the future of Big Data in Brussels.

“Big Data, you say… a pragmatic approach is needed!” – Ferdinand Casier – Agoria

Talk in French

This presentation will review several projects in order to identify the major trends in innovative data projects. We will then look at how the Brussels-Capital Region can contribute to the development of its socio-economic fabric by making use of all available data-analysis resources.

“How do you become a data science expert in Belgium?” – Philippe Van Impe – DIHub

Talk in Dutch

This presentation gives an overview of how an expert can grow into a fully fledged data scientist. It also highlights the existing data science and Big Data communities, and their activities to help data science professionals gain the experience needed to quickly take on this type of role.

Closing remarks by Didier Gosuin, Minister of the Brussels-Capital Region for Employment, Economy, and Professional Training



Where? Diamant Building, Baekeland Room, Bd A. Reyers 80 – 1030 Brussels

When? Wednesday 7 December 2016, from 9:00 to 14:30

Please send us your reply by 28 November 2016 at the latest

+32 (0)2 475 20 00 – bigdata@evoliris.be




Launching The Dengue Hackathon

On October 11th, the diHub hosted the launch event for the DengueHack.org hackathon, taking place on November 25th and 26th. Each Tuesday leading up to the hackathon, you’re welcome to join our meetings at the diHub to discuss the data we’ve gathered and prepare for the hackathon. You can learn more about our upcoming events on our meetup page.

We were lucky enough to have the following speakers present: Serge Masyn from Janssen (the pharmaceutical company of Johnson & Johnson); Dr. Guillermo Herrera-Taracena from Johnson & Johnson; Anne-Mieke Vandamme, a professor at KU Leuven; Daniel Balog, Stefan Pauwels, and Tom Crauwels from Luciad; Jeroen Dries from Vito; Guy Hendrickx from Avia-GIS; and Pierre Marchand from Teradata.

Annelies Baptist, bootcamp participant and project manager for the hackathon, opened the presentation by explaining the importance of our hackathon and fighting the spread of dengue, and ended by introducing the rest of our speakers.


Serge Masyn, director at Janssen Global Public Health, presented Janssen’s three goals for the hackathon: to raise awareness about global public health, to raise awareness about dengue, and to try to create new insights into the spread of dengue and predictions of future outbreaks. A year ago, this initiative was only an idea, and Serge was pleased to see how much progress we’ve made toward making it a reality (here is a video from the March 2016 di-Summit, where Serge announced Janssen’s desire to sponsor what would become this very dengue hackathon).


Serge then introduced Dr. Guillermo Herrera-Taracena, the global clinical leader on infectious diseases and global public health for Johnson & Johnson. Guillermo is an engaging and enthusiastic speaker, and he made a point to emphasize the importance of this work to global health at large. After the ebola outbreak, Zika took its place in the public perception as the leading global health concern. Though dengue is a serious public health burden in its own right, Zika, Guillermo claimed, is a cousin, if not a brother, of the dengue virus, and both diseases are carried by the same species of mosquito. Whatever you do to understand Zika, you’ve done for dengue, and vice versa. If that isn’t a good enough reason to work on dengue, he said, he wasn’t sure what is.


Anne-Mieke Vandamme, a professor at KU Leuven and head of the Laboratory of Clinical and Epidemiological Virology, called in from Lisbon to give a talk about mapping epidemics. Using phylogenetic trees, scientists can reconstruct the origin and development of a virus outbreak. After her presentation, she introduced Daniel Balog, a senior software engineer at Luciad with whom she had previously collaborated. Daniel gave a demo using Luciad software showing an animation of the Ebola outbreak in Sierra Leone, Liberia, and Guinea.


Then, Stefan Pauwels and Tom Crauwels gave a demo of Luciad’s software products. Though most of their software is geared toward military and aviation use, the technology that makes it possible to visualize position updates every second for millions of points has applications well beyond those industries. For the hackathon, Luciad will be offering the free use of their software, and will also provide a training workshop in preparation for the event.


Jeroen Dries from Vito then discussed how satellite imagery can be used in the hackathon to fight dengue. Vito operates a Belgian satellite that takes daily images, which are combined into a global time series analysis of how an area has been evolving. They’ve built an application around these time series that includes meteorological data for each country, which is of particular importance for the hackathon. For this event, Vito will provide us with a cloud platform that has access to a Hadoop cluster for processing their satellite data.


Guy Hendrickx from Avia-GIS presented their research on dengue, in which they mapped the tiger mosquito. In the ’90s, Guy was one of the first people to use satellite data to model tsetse fly distribution and the diseases they transmit. In 2010, Avia-GIS began developing a database for the European Centre for Disease Prevention and Control covering the network of mosquitoes, ticks, and sandflies all over Europe, producing maps of these different species every three months. Avia-GIS are also generously providing the free use of these databases for the hackathon.


Finally, Pierre Marchand presented from Teradata. Put in the unfortunate position of being the last barrier between a room full of hungry people and their pizza, he made his presentation quick. Teradata will be providing the free use of their Aster platform for storing and modeling the data, and will be providing training on using this platform in the coming weeks leading up to the hackathon.


And, at the end, there was pizza, beer, and networking.


Again, we’d like to extend an enormous thank you to the speakers at the event, and to the organizations involved for their previous and ongoing support. You can view pictures of the event on our Facebook page and videos of the presentations on our YouTube channel.

OCT20 – FREE Meetup about Process Mining @VUBrussel


18:30 Update on the activities of the Data Science Community

Confirmed speakers:

19:00 Jochen De Weerdt (KU Leuven) : Process mining – Making data science actionable

19:30 Mieke Jans (UHasselt): The art of building valuable event logs out of relational databases

20:00 Pieter Dyserinck (ING) & Pieter Van Bouwel (Python Predictions): Process mining, the road to a superior customer experience

20:30 Open discussion and flash presentations. Startups welcome.

20:40 Networking and drinks @ ‘t Complex situated above the swimming pool

Reserve your seat here

Data Science Bootcamp: Week 2

My name’s Alexander Chituc, and I’ll be your foreign correspondent in Brussels, regularly reporting on the diHub and the data science community here in Belgium. I’m an American, I studied philosophy at Yale, and I’m one of the seventeen boot-campers for the di-Academy.

We started the second week of the Data Science bootcamp developing some more practical skills. The first day was devoted to learning about building predictive models using R with Nele Verbiest, a Senior Analyst from Python Predictions. The second day, we worked with Xander Steenbrugge, a data analyst from Datatonic, learning about Data Visualization using Tableau Software.

Day 1: Predictive modeling

Nele told us to think of predictive modeling as the use of all available information to predict future events to optimize decision making. Just making predictions isn’t enough, she said, if there’s no action to take.

The analogy used throughout the training was that developing a predictive model was like cooking. We can think of cooking for a restaurant as having five general steps: take the order, prepare the ingredients, determine the proportion of ingredients to use and how to cook them, taste and approve the dish, and finally, serve the dish and check in with the customer. We can translate this into five analogous steps for preparing a predictive model: project definition, data preparation, model building, model validation, and model usage.


We were given a lab in predictive modeling in R, providing us with hands-on experience with the methodology and techniques of predictive modeling. A sample dataset was provided, and the lab walked us step by step through the process of developing a model to detect the predictors that determine the likelihood of whether a customer will churn (for those outside the biz, a churn rate is the rate at which individuals leave a community over time, in this case that means canceling a subscription with a telecom provider). This lab took us through all five steps of the process, and along the way we cleaned data, replaced any outliers, went over the basics of model building, discussed the danger of over-fitting a model (the analogy here was recording a concert — you want to record the music, not the sound of the audience, conductor’s baton, or pages turning) and how to simplify a model to prevent this. We went over decision trees, linear regression, logistic regression, variable selection, and how to evaluate your model.
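To make the model-building step concrete, here is a small, self-contained sketch of a churn model. It is not the lab’s actual R code: the single “support calls” feature, the synthetic data, and the plain gradient-descent logistic regression are all our own simplifications.

```python
import math
import random

# Toy churn model: logistic regression fit by gradient descent on
# made-up data (the feature and all numbers are illustrative only).
random.seed(0)

# Synthetic customers: monthly support calls; heavy callers churn more.
data = []
for _ in range(500):
    calls = random.randint(0, 10)
    p_churn = 1 / (1 + math.exp(-(calls - 5)))   # ground-truth relationship
    data.append((calls, 1 if random.random() < p_churn else 0))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Model building: fit weight w and bias b by full-batch gradient descent.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# Model validation: accuracy on the training data (a real project would
# use a held-out test set to guard against over-fitting).
correct = sum((sigmoid(w * x + b) >= 0.5) == (y == 1) for x, y in data)
print(f"w={w:.2f}, b={b:.2f}, accuracy={correct / len(data):.0%}")
```

A real project would then follow the remaining steps from the lab: validate on unseen data, simplify the model if it over-fits, and only then put it to use.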


There’s obviously a lot more detail I could get into here, but if I had to write about all of it, I’d never get the chance to write about day two.

Day 2: Data Visualization using Tableau Software

The second day, we immediately jumped into how to use Tableau software. Considering just how much it’s possible to do with this program, I was surprised by how intuitive and easy to use it was. Managing data is extremely simple, and to create a graph you simply set the parameters, select the graph type, assign data to the columns and rows, set any filters you might want, and choose which data you want to represent visually by color, size, or label.

Xander walked us through how to create the dashboard below, demonstrating the sales of a sample superstore geographically, showing which quarters and departments had the most sales, as well as the average shipping delay for each category and subcategory.

After lunch, we were given a dataset and an image of a dashboard, and asked to recreate it ourselves in Tableau. After learning the basics with Xander, it was nice to be tossed into the pool to get some real practice swimming.


If you’re interested in seeing more of what Tableau software is capable of, here’s an example of an interactive graph from their website, where you can explore Global Nuclear Energy Use. You can explore the entire gallery here.

Thanks again to Nele Verbiest and Xander Steenbrugge for being such great teachers, and expect a post on week 3 soon.

Bayes in Action

During my coursework in philosophy, we devoted a lot of time to discussing Bayes’ theorem. Two fields find it particularly important: the philosophy of science, and epistemology, the study of what knowledge is. It’s considered a pillar of rational thinking and of increasing our understanding of the world, and it’s fundamental for evaluating claims given the evidence we have. Bayes’ theorem looks like this:

Bayes’ Theorem: P(A|B) = P(B|A) × P(A) / P(B)

To put it simply, Bayes’ theorem describes the probability of a hypothesis or event based on relevant conditions or evidence. This equation might look complex, but it’s actually quite easy to understand after a little bit of translation. ‘P’ stands for ‘the probability that’, ‘|’ is a symbol that means something like ‘given that’, ‘A’ stands for a hypothesis, and ‘B’ stands for an event or evidence that might impact the likelihood of the hypothesis. When we understand it this way, the equation reads: the probability of a hypothesis given some evidence is equal to the probability of that evidence given the hypothesis, multiplied by the probability of the hypothesis, all divided by the probability of that evidence.

An example can clear things up. Let’s say you check WebMD because you have a nasty cough. You see that having a nasty cough is a symptom of cancer, and that the likelihood of having this cough if you have cancer is very, very high. If you had cancer, this nasty cough is exactly what you would expect to see, so it must be pretty probable that you have cancer, and like most people who visit WebMD, you walk away convinced that you’re dying. Bayes’ theorem helps us see why thinking this way is a mistake.

Let’s fill in the equation with some numbers we made up. Let’s assume the probability that you have the cough given that you have cancer is very high: 95%. But you’re a young and healthy person, so at your age, only one in a hundred thousand people get this kind of cancer. And again, let’s assume having a nasty cough is pretty common, it’s cold season after all, so one in a hundred people have a nasty cough. Filling it in, we get this:

P(cancer | cough) = (0.95 × 0.00001) / 0.01 = 0.00095

So, if we do the math, we come up with your probability of having cancer given that you have a nasty cough: 0.00095, a pretty small chance.
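The arithmetic above is easy to check in a few lines of Python:

```python
# Worked Bayes example from the post: P(cancer | cough).
p_cough_given_cancer = 0.95      # likelihood: 95%
p_cancer = 1 / 100_000           # prior: one in a hundred thousand
p_cough = 1 / 100                # evidence: one in a hundred have the cough

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_cancer_given_cough = p_cough_given_cancer * p_cancer / p_cough
print(round(p_cancer_given_cough, 5))  # 0.00095
```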

The application of Bayes’ theorem in the field of medicine is extremely useful, especially when considering the accuracy of tests and the likelihood of false positives or false negatives, and there are countless other practical applications for it.

Bayes’ theorem is quite simple, but its application to the field of statistics, or Bayesian statistics, is quite complex, and it’s an important part of how Google can filter search results for you, how your email can detect spam, and how Nate Silver could accurately predict the 2008 presidential election in the United States.

Romke Bontekoe, who holds a PhD in astronomy, typically offers his Bayes in Action course in Amsterdam, but on October 20th he’ll be offering his training here at the European Data Innovation Hub. The training is geared toward managers and researchers who want to understand Bayesian statistics and its applications, but the course is open to anybody interested.

If you’d like to learn more about Bayes’ theorem, you can look at this video that I animated for Wireless Philosophy, and if you want to learn more about Bayesian statistics and its applications, register for the training on the di-academy’s website.