Launching The Dengue Hackathon

On October 11th, the diHub hosted the launch event for the DengueHack.org hackathon, taking place on November 25th and 26th. Each Tuesday leading up to the hackathon, you’re welcome to join our meetings at the diHub to discuss the data we’ve gathered and prepare for the event. You can learn more about our upcoming events on our meetup page.

We were lucky enough to have the following speakers present: Serge Masyn from Janssen (the pharmaceutical company of Johnson & Johnson), Dr. Guillermo Herrera-Taracena from Johnson & Johnson, Anne-Mieke Vandamme, a professor at KU Leuven, Daniel Balog, Stefan Pauwels, and Tom Crauwels from Luciad, Jeroen Dries from Vito, Guy Hendrickx from Avia-GIS, and Pierre Marchand from Teradata.

Annelies Baptist, bootcamp participant and project manager for the hackathon, opened the presentation by explaining the importance of our hackathon and fighting the spread of dengue, and ended by introducing the rest of our speakers.


Serge Masyn, director at Janssen Global Public Health, presented Janssen’s three goals for the hackathon: to raise awareness about global public health, to raise awareness of dengue, and to create new insights into the spread of dengue and predictions of future outbreaks. A year ago, this initiative was only an idea, and Serge was pleased to see how much progress we’ve made toward making it a reality (here is a video from the March 2016 di-Summit, where Serge announced Janssen’s desire to sponsor what would become this very dengue hackathon).


Serge then introduced Dr. Guillermo Herrera-Taracena, the global clinical leader on infectious diseases and global public health for Johnson & Johnson. Guillermo is an engaging and enthusiastic speaker, and he made a point of emphasizing the importance of this work to global health at large. After the Ebola outbreak, Zika took its place in the public perception as the leading global health concern. Though dengue is a serious public health burden in its own right, Zika, Guillermo claimed, is a cousin, if not a brother, of the dengue virus, and both diseases are carried by the same species of mosquito. Whatever you do to understand Zika, you’ve done for dengue, and vice versa. If that isn’t a good enough reason to work on dengue, he said, he wasn’t sure what would be.


Anne-Mieke Vandamme, a professor at KU Leuven and head of the Laboratory of Clinical and Epidemiological Virology, called in from Lisbon to give a talk about mapping epidemics. Using phylogenetic trees, scientists can reconstruct the origin and development of a virus outbreak. After her presentation, she introduced Daniel Balog, a senior software engineer at Luciad with whom she had previously collaborated. Daniel gave a demo using Luciad software showing an animation of the Ebola outbreak in Sierra Leone, Liberia, and Guinea.


Then, Stefan Pauwels and Tom Crauwels gave a demo of Luciad’s software products. Though most of their software is geared toward military and aviation use, the technology that makes it possible to visualize per-second position updates for millions of points has applications well beyond those industries. For the hackathon, Luciad will offer free use of their software and will also provide a training workshop in preparation for the event.

Tom Crauwels

Stefan Pauwels

Jeroen Dries from Vito then discussed how satellite imagery can be used in the hackathon to fight dengue. Vito operates a Belgian satellite that takes daily images, which are combined into a global time-series analysis of how an area has been evolving. They’ve built an application focused on these time series that includes meteorological data from each country, which is of particular importance for the hackathon. For this event, Vito will provide us with a cloud platform that has access to a Hadoop cluster for processing their satellite data.


Guy Hendrickx from Avia-GIS presented their research on dengue, for which they mapped the tiger mosquito. In the ’90s, Guy was one of the first people to use satellite data to model tsetse fly distribution and the diseases they transmit. In 2010, Avia-GIS began developing a database for the European Centre for Disease Prevention and Control covering mosquitos, ticks, and sandflies all over Europe, producing maps of these different species every three months. Avia-GIS is also generously providing free use of these databases for the hackathon.


Finally, Pierre Marchand presented from Teradata. Put in the unfortunate position of being the last barrier between a room full of hungry people and their pizza, he kept his presentation quick. Teradata will provide free use of their Aster platform for storing and modeling the data, and will offer training on the platform in the weeks leading up to the hackathon.


And, at the end, there was pizza, beer, and networking.


Again, we’d like to extend an enormous thank-you to the event’s speakers and to the organizations involved for their previous and ongoing support. You can view pictures of the event on our Facebook page and videos of the presentations on our YouTube channel.

OCT 20 – FREE Meetup about Process Mining @VUBrussel


18:30 Update on the activities of the Data Science Community

Confirmed speakers:

19:00 Jochen De Weerdt (KU Leuven) : Process mining – Making data science actionable

19:30 Mieke Jans (UHasselt): The art of building valuable event logs out of relational databases

20:00 Pieter Dyserinck (ING) & Pieter Van Bouwel (Python Predictions): Process mining, the road to a superior customer experience

20:30 Open discussion and flash presentations. Startups welcome.

20:40 Networking and drinks @ ‘t Complex situated above the swimming pool

Reserve your seat here

Data Science Bootcamp: Week 2

My name’s Alexander Chituc, and I’ll be your foreign correspondent in Brussels, regularly reporting on the diHub and the data science community here in Belgium. I’m an American, I studied philosophy at Yale, and I’m one of the seventeen boot-campers for the di-Academy.

We started the second week of the Data Science bootcamp developing some more practical skills. The first day was devoted to learning about building predictive models using R with Nele Verbiest, a Senior Analyst from Python Predictions. The second day, we worked with Xander Steenbrugge, a data analyst from Datatonic, learning about Data Visualization using Tableau Software.

Day 1: Predictive modeling

Nele told us to think of predictive modeling as the use of all available information to predict future events to optimize decision making. Just making predictions isn’t enough, she said, if there’s no action to take.

The analogy used throughout the training was that developing a predictive model was like cooking. We can think of cooking for a restaurant as having five general steps: take the order, prepare the ingredients, determine the proportion of ingredients to use and how to cook them, taste and approve the dish, and finally, serve the dish and check in with the customer. We can translate this into five analogous steps for preparing a predictive model: project definition, data preparation, model building, model validation, and model usage.


We were given a lab in predictive modeling in R, providing hands-on experience with the methodology and techniques involved. A sample dataset was provided, and the lab walked us step by step through developing a model to identify the predictors of customer churn (for those outside the biz, the churn rate is the rate at which individuals leave a community over time, in this case by canceling a subscription with a telecom provider). The lab took us through all five steps of the process, and along the way we cleaned data, replaced outliers, went over the basics of model building, discussed the danger of over-fitting a model (the analogy here was recording a concert: you want to record the music, not the sound of the audience, the conductor’s baton, or pages turning) and how to simplify a model to prevent it. We also covered decision trees, linear regression, logistic regression, variable selection, and how to evaluate your model.
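
The lab itself was in R, with a dataset I can’t reproduce here, but the core idea of the model-building step can be sketched in a few lines. Below is a minimal pure-Python illustration of logistic regression trained by gradient descent on an invented toy churn dataset (the feature names and numbers are made up for illustration, not taken from the lab):

```python
import math

def train_logistic(features, labels, lr=0.1, epochs=1000):
    """Fit a logistic regression with plain stochastic gradient descent."""
    n = len(features[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = bias + sum(w * xi for w, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))        # predicted churn probability
            err = p - y                           # gradient of the log-loss
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def predict(weights, bias, x):
    """Return the model's churn probability for one customer."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: [monthly_bill (scaled), support_calls (scaled)];
# label 1 means the customer churned.
X = [[0.1, 0.0], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.3, 0.2], [0.7, 0.8]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)
print(predict(w, b, [0.85, 0.8]))  # high bill, many support calls: churn-like
```

In practice you’d use a library (R’s `glm`, for instance) rather than hand-rolled gradient descent, but the sketch shows what “model building” is doing under the hood.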

 

There’s obviously a lot more detail I could get into here, but if I had to write about all of it, I’d never get the chance to write about day two.

Day 2: Data Visualization using Tableau Software

The second day, we immediately jumped into how to use Tableau software. Considering just how much it’s possible to do with this program, I was surprised by how intuitive and easy to use it was. Managing data is extremely simple, and to create a graph you simply set the parameters, select the graph type, assign data to the columns and rows, set any filters you might want, and choose which data you want to visually represent by color, size, or label.

Xander walked us through how to create the dashboard below, demonstrating the sales of a sample superstore geographically, showing which quarters and departments had the most sales, as well as the average shipping delay for each category and subcategory.

After lunch, we were given a dataset and an image of a dashboard, and asked to recreate it ourselves in Tableau. After learning the basics with Xander, it was nice to be tossed into the pool to get some real practice swimming:


If you’re interested in seeing more of what Tableau software is capable of, here’s an example of an interactive graph from their website, where you can explore Global Nuclear Energy Use. You can explore the entire gallery here.

Thanks again to Nele Verbiest and Xander Steenbrugge for being such great teachers, and expect a post on week 3 soon.

Bayes in Action

During my coursework in philosophy, we devoted a lot of time to discussing Bayes’ theorem. Two fields find it particularly important: the philosophy of science and epistemology, the study of what knowledge is. It’s considered a pillar of rational thinking and of increasing our understanding of the world, and it’s fundamental for evaluating claims given the evidence we have. Bayes’ theorem looks like this:

P(A|B) = P(B|A) × P(A) / P(B) (Bayes’ Theorem)

To put it simply, Bayes’ theorem describes the probability of a hypothesis or event based on relevant conditions or evidence. The equation might look complex, but it’s actually quite easy to understand after a little bit of translation. ‘P’ stands for ‘the probability that’, ‘|’ is a symbol that means something like ‘given that’, ‘A’ stands for a hypothesis, and ‘B’ stands for an event or evidence that might impact the likelihood of the hypothesis. Understood this way, the equation reads: the probability of a hypothesis given some evidence is equal to the probability of that evidence given the hypothesis, multiplied by the probability of the hypothesis, all divided by the probability of that evidence.

An example can clear things up. Let’s say you check WebMD because you have a nasty cough. You see that having a nasty cough is a symptom of cancer, and that the likelihood of having this cough if you have cancer is very, very high. If you had cancer, this nasty cough is exactly what you would expect to see, so it must be pretty probable that you have cancer, and like most people who visit WebMD, you walk away convinced that you’re dying. Bayes’ theorem helps us see why thinking this way is a mistake.

Let’s fill in the equation with some made-up numbers. Assume the probability that you have the cough given that you have cancer is very high: 95%. But you’re a young and healthy person, so at your age, only one in a hundred thousand people get this kind of cancer. And let’s assume having a nasty cough is pretty common (it’s cold season, after all), so one in a hundred people have one. Filling it in, we get this:

P(cancer|cough) = (0.95 × 0.00001) / 0.01

So, if we do the math, we come up with your probability of having cancer given that you have a nasty cough: 0.00095, a pretty small chance.

The application of Bayes’ theorem in the field of medicine is extremely useful, especially when considering the accuracy of tests and the likelihood of false positives or false negatives, and there are countless other practical applications for it.

Bayes’ theorem is quite simple, but its application to the field of statistics, or Bayesian statistics, is quite complex, and it’s an important part of how Google can filter search results for you, how your email can detect spam, and how Nate Silver could accurately predict the 2008 presidential election in the United States.
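
To give a flavor of the spam-filtering application: a “naive Bayes” filter applies the same theorem word by word, comparing how often each word appears in known spam versus legitimate mail. Here’s a deliberately tiny sketch with a made-up six-message corpus (real filters train on millions of messages and add many refinements):

```python
from collections import Counter

# Tiny made-up training corpus.
spam_docs = ["win money now", "free money offer", "win free prize"]
ham_docs = ["meeting at noon", "lunch money tomorrow", "see you at the meeting"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_probability(message):
    """Naive Bayes with Laplace smoothing: multiply per-word likelihoods,
    then normalize the two scores via Bayes' theorem."""
    p_spam = p_ham = 0.5  # equal priors for spam and ham
    for word in message.split():
        p_spam *= (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_ham *= (ham_counts[word] + 1) / (ham_total + len(vocab))
    return p_spam / (p_spam + p_ham)

print(spam_probability("free money"))      # high: words common in spam
print(spam_probability("meeting at noon")) # low: words common in ham
```

The “naive” part is the assumption that words occur independently of one another, which is false but works surprisingly well in practice.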

Romke Bontekoe, who holds a PhD in astronomy, typically offers his Bayes in Action course in Amsterdam, but on October 20th, he’ll be offering his training here at the European Data Innovation Hub. The training is geared toward managers and researchers who want to understand Bayesian statistics and its applications, but the course is open to anybody interested.

If you’d like to learn more about Bayes’ theorem, you can look at this video that I animated for Wireless Philosophy, and if you want to learn more about Bayesian Statistics and its application, register for the training on the di-academy’s website.

 

Mons OCT 24 – Big Data and Privacy – Vincent Blondel

 

As a prelude to Big Data Week 2016:

A major lecture by Vincent Blondel, rector of UCL

Big Data and Privacy

Monday, October 24th at 7 p.m., at the Mundaneum in Mons

Introduction by Philippe Busquin, Minister of State and European Commissioner for Scientific Research from 1999 to 2004

“The Internet promotes our freedoms and is a source of extraordinary opportunities. At the same time, information and communication technologies create major risks for our freedoms and the protection of our privacy. Surveillance in all its forms has become commonplace, and neither the major Internet players nor governments hesitate to use it. Snowden’s revelations have opened many eyes. Yet the very technologies that can be used to spy on us can also serve to protect us. But where do we find the balance?”

(Académie Royale de Belgique)

Vincent Blondel has been rector of the Université catholique de Louvain since September 1st, 2014. His research lies at the interface of mathematics and information technology. He earned a Master of Science at Imperial College London and completed postdoctoral work in Oxford, Stockholm, and Paris. He has been a visiting professor at MIT as well as a Fulbright Scholar, and has been invited to speak at numerous institutions, including Stanford, Harvard, Princeton, and Cambridge. He has also collaborated on many cross-disciplinary projects at UCL.

The debate following the lecture will be moderated by André Blavier (Agence du Numérique).

A lecture organized by the Mundaneum in collaboration with UCL and Digital Wallonia

The Mundaneum is a partner of UCL’s 2016–2017 academic year, placed under the theme of the “Scientific Adventure.”

Venue: Mundaneum, rue de Nimy 76, 7000 Mons (Belgium)

Registration requested: info@mundaneum.be or 065/31.53.43
