Job – external consultant – R & R-Shiny expertise – Brussels

R Shiny

Ref FL-Rshiny: Freelance, 3 months starting ASAP in Brussels.

For one of our Innovation Partners in Brussels we are looking for an external consultant, a specialist in R and R-Shiny.

The project aims at identifying the communes with profitable growth potential, and highlighting the brokers with whom we should work more closely to reach our objectives.

One important aspect of this project is the visualisation of this information on a map. We chose R-Shiny to develop it, but our knowledge of the language is still limited and we would like to add new functionalities.

The consultant we are looking for must have strong knowledge of R, R-Shiny, R-Leaflet and, if possible, JavaScript. He will also need good pedagogical skills: the mission is not to program alone, but to help upgrade the team's knowledge through close cooperation while programming. Finally, the working language will be English.

Example of functionalities already in the tool:

  • Map showing the location of the different points of sale, with key features inside popups
  • Table summarizing key variables about the communes and the points of sale, with the possibility to download these tables in Excel format
  • Value boxes, inside or outside the map, giving information about the main KPIs of the geographical zone
  • Possibility to zoom in on strategic geographical zones on the map

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you in contact with them too, just send your CV to datasciencebe@gmail.com.

Please send your offer to pvanimpe@dihub.eu with ref FL-Rshiny.

How Tom won his first Kaggle competition

This is a copy of Tom’s original post on Github.

Winning approach of the Facebook V Kaggle competition

The Facebook V: Predicting Check Ins data science competition, where the goal was to predict which place a person would like to check in to, has just ended. I participated with the goal of learning as much as possible, and maybe aiming for a top 10% finish since this was my first serious Kaggle competition attempt. I managed to exceed all expectations and finished 1st out of 1,212 participants! In this post, I'll explain my approach.

Overview

This blog post covers all the steps from the raw data to the winning submission. Here's an overview of the different sections; if you want to skip ahead, just click a section title to go there.

The R source code is available on GitHub. This thread on the Kaggle forum discusses the solution on a higher level and is a good place to start if you participated in the challenge.

Introduction

Competition banner

From the competition page: The goal of this competition is to predict which place a person would like to check in to. For the purposes of this competition, Facebook created an artificial world consisting of more than 100,000 places located in a 10 km by 10 km square. For a given set of coordinates, your task is to return a ranked list of the most likely places. Data was fabricated to resemble location signals coming from mobile devices, giving you a flavor of what it takes to work with real data complicated by inaccurate and noisy values. Inconsistent and erroneous location data can disrupt experience for services like Facebook Check In.

The training data consists of approximately 29 million observations where the location (x, y), accuracy, and timestamp are given along with the target variable, the check in location. The test data contains 8.6 million observations where the check in location should be predicted based on the location, accuracy and timestamp. The train and test data sets are split based on time. There is no concept of a person in this dataset: all the observations are events, not people.

A ranked list of the top three most likely places is expected for all test records. The leaderboard score is calculated using the MAP@3 criterion. Ranking the actual place as the most likely candidate scores 1, ranking it second scores 1/2, and ranking it third scores 1/3. If the actual place is not in the top three of ranked places, a score of 0 is awarded for that record. The total score is the mean of the observation scores.
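The MAP@3 criterion is straightforward to compute directly. The competition code is in R; as an illustration, here is a minimal Python sketch (the function name and toy data are mine, not the author's):

```python
def map_at_3(predictions, actual):
    """Mean Average Precision at 3: each record scores 1, 1/2 or 1/3
    depending on the rank of the true place, and 0 if it is absent."""
    total = 0.0
    for preds, truth in zip(predictions, actual):
        for rank, place in enumerate(preds[:3], start=1):
            if place == truth:
                total += 1.0 / rank
                break
    return total / len(actual)

# Truth ranked 1st, 3rd and missing -> (1 + 1/3 + 0) / 3 = 4/9
print(map_at_3([[7, 2, 9], [4, 5, 7], [1, 2, 3]], [7, 7, 9]))
```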

Check ins where each place has a different color

Exploratory analysis

Location analysis of the train check ins revealed interesting patterns between the variation in x and y. There appears to be way more variation in x than in y. It was suggested that this could be related to the streets of the simulated world. The difference in variation between x and y however differs from place to place, and there is no obvious spatial (x-y) pattern in this relationship.

It was quickly established by the community that time is measured in minutes and could thus be converted to relative hours and days of the week. This means that the train data covers 546 days and the test data spans 153 days. All places seem to live in independent time zones with clear hourly and daily patterns. No spatial pattern was found with respect to the time patterns. There are however two clear dips in the number of check ins during the train period.
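The minute-level timestamp maps directly to days, hours and weekdays. A Python sketch of the conversion (which calendar weekday corresponds to day 0 is not given, so weekday 0 here is arbitrary):

```python
MINUTES_PER_DAY = 24 * 60

def time_features(t_minutes):
    """Convert the raw minute timestamp into a day index, hour of day
    and day of week (day 0 is arbitrarily assigned weekday 0)."""
    day = t_minutes // MINUTES_PER_DAY
    hour = (t_minutes % MINUTES_PER_DAY) / 60
    weekday = day % 7
    return day, hour, weekday

# Minute 587158, the last minute of the summary period, falls on day 407
print(time_features(587158))
```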

Accuracy was by far the hardest input to interpret. It was expected that it would be clearly correlated with the variation in x and y but the pattern is not as obvious. Halfway through the competition I cracked the code and the details will be discussed in the Feature engineering section.

I wrote an interactive Shiny application to research these interactions for a subset of the places. Feel free to explore the data yourself!

Problem definition

The main difficulty of this problem is the extended number of classes (places). With 8.6 million test records there are about a trillion (10^12) place-observation combinations. Luckily, most of the classes have a very low conditional probability given the data (x, y, time and accuracy). The major strategy on the forum to reduce the complexity consisted of calculating a classifier for many x-y rectangular grids. It makes sense to use the spatial information since it shows the most obvious and strongest pattern for the different places. This approach makes the complexity manageable but is likely to lose a significant amount of information since the data is so variable. I decided to model the problem with a single binary classification model in order to avoid ending up with many high-variance models. The lack of any major spatial pattern in the exploratory analysis supports this approach.

Strategy

Generating a single classifier for all place-observation combinations would be infeasible even with a powerful cluster. My approach consists of a stepwise strategy in which the conditional place probability is only modeled for a set of place candidates. A simplification of the overall strategy is shown below.

High level strategy

The given raw train data is split into two chronological parts, with a ratio similar to the ratio between the train and test data. The summary period contains all given train observations of the first 408 days (minutes 0-587158). The second part of the given train data contains the next 138 days and will be referred to as the train/validation data from now on. The test data spans 153 days as mentioned before.

The summary period is used to generate train and validation features and the given train data is used to generate the same features for the test data.

The three raw data groups (train, validation and test) are first sampled down into batches that are as large as possible but can still be modeled with the available memory. I ended up using batches of approximately 30,000 observations on a 48GB workstation. The sampling process is fully random and results in train/validation batches that span the entire 138 days’ train range.

Next, the number of candidates is reduced from 100 to 20 using 15 XGBoost models in the second candidate selection step. The conditional probability P(place_match|features) is modeled for all ~30,000*100 place-observation combinations, and the mean predicted probability of the 15 models is used to select the top 20 candidates for each observation. These models use features that combine place and observation measures from the summary period.

The same features are used to generate the first level learners. Each of the 100 first level learners is again an XGBoost model, built using ~30,000*20 feature-place_match pairs. The predicted probabilities P(place_match|features) are used as features for the second level learners, along with 21 manually selected features. The candidates are ordered using the mean predicted probabilities of the 30 second level XGBoost learners.
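The recurring pattern of averaging predicted probabilities over an ensemble and keeping the best candidates can be sketched as follows (a Python illustration of the R pipeline; names are mine):

```python
def rank_candidates(model_probs, candidate_ids, top=20):
    """Average P(place_match | features) over an ensemble of models
    (one probability list per model) and keep the `top` candidates."""
    n_models = len(model_probs)
    mean = [sum(m[j] for m in model_probs) / n_models
            for j in range(len(candidate_ids))]
    order = sorted(range(len(candidate_ids)), key=lambda j: -mean[j])
    return [candidate_ids[j] for j in order[:top]]

# Two models scoring three candidates; candidate 'b' wins on the mean
probs = [[0.1, 0.7, 0.2],
         [0.2, 0.6, 0.4]]
print(rank_candidates(probs, ['a', 'b', 'c'], top=2))  # ['b', 'c']
```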

All models are built using different train batches. Local validation is used to tune the model hyperparameters.

Candidate selection 1

The first candidate selection step reduces the number of potential classes from >100K to 100 by considering nearest neighbors of the observations. I considered the neighbor counts of the 2500 nearest neighbors where y variations are 2.5 times more important than x variations. Ties in the neighbor counts are resolved by the mean time difference since the observations. Resolving ties with the mean time difference is motivated by the shifts in popularity of the places.
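A brute-force Python sketch of this weighted nearest-neighbor candidate selection (the actual R/Rcpp code uses overlapping grids for efficiency, and the tie-breaking by mean time difference is omitted here):

```python
from collections import Counter

def top_candidates(query, points, places, k=2500, y_ratio=2.5, n_cand=100):
    """Rank places by their counts among the k nearest neighbors of the
    query point; differences in y weigh y_ratio times more than in x."""
    def sq_dist(i):
        dx = points[i][0] - query[0]
        dy = (points[i][1] - query[1]) * y_ratio
        return dx * dx + dy * dy
    nn = sorted(range(len(points)), key=sq_dist)[:k]
    counts = Counter(places[i] for i in nn)
    return [place for place, _ in counts.most_common(n_cand)]

# Three close check ins of place 1 beat the two distant ones of place 2
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(top_candidates((0.0, 0.0), pts, [1, 1, 1, 2, 2], k=3))  # [1]
```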

The nearest neighbor counts are calculated efficiently by splitting the data up into overlapping rectangular grids. Grids are made as small as possible while still guaranteeing that the 2500 nearest neighbors fall within the grid in the worst case. The R code is suboptimal because of several loops, but the major bottleneck (ordering the distances) was reduced by a custom Rcpp package, which resulted in an approximate 50% speed-up. Improving the logic further was not a major priority since the features were calculated in the background.

Feature engineering

Feature engineering strategy

Three weeks into the eight-week competition, I climbed to the top of the public leaderboard with about 50 features. From then on I kept thinking of new features to capture the underlying patterns of the data. I also added features that are similar to the most important features in order to capture the subtler patterns. The final model contains 430 numeric features, and this section discusses the majority of them.

There are two types of features. The first category relates to features that are calculated using only the summary data such as the number of historical check ins. The second and largest category combines summary data of the place candidates with the observation data. One such example is the historical density of a place candidate, one year prior to the observation.

All features are rescaled if needed in order to result in similar interpretations for the train and test features.

Location

The major share of my 430 features is based on nearest neighbors. The neighbor counts for different Ks (1, 5, 10, 20, 50, 100, 250, 500, 1000 and 2500) and different x-y ratio constants (1, 2.5, 4, 5.5, 7, 12 and 30) resulted in 10*7 features. For example: if a test observation has 3 of its 5 nearest neighbors in class A and 2 of its 5 nearest neighbors in class B, candidate A will contain the value 3 for the K=5 feature, candidate B the value 2, and all other 18 candidates the value 0 for that feature.

The mean time difference between a candidate and all 70 combinations resulted in 70 additional features. 10 more features were added by considering the distance between the Kth nearest neighbor and the observation for a ratio constant of 2.5; these features are an indication of the spatial density. 40 more features were added in a later iteration around the most significant nearest neighbor features: K was set at (35, 75, 100, 175, 375) for x-y ratio constants (0.4, 0.8, 1.3, 2, 3.2, 4.5, 6 and 8). The distances of all 40 combinations to the most distant neighbor were also added as features. Distance features are divided by the number of summary observations in order to have similar interpretations for the train and test features.

I further added several features that consider the (smoothed) spatial grid densities. Other location features relate to the place summaries, such as the median absolute deviations and standard deviations in x and y. The ratio between the median absolute deviations was added as well. Features were relaxed using additive (Laplace) smoothing with different relaxation constants (20 and 300) whenever it made sense. Consequently, the relaxed mad for a place with 300 summary observations is equal to the mean of the place mad and the weighted place population mad for a relaxation constant of 300.
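This relaxation is a one-line weighted average. A Python sketch (the function name is mine), reproducing the worked example of a place with 300 observations and a relaxation constant of 300:

```python
def relaxed_stat(place_stat, place_n, population_stat, relaxation=300):
    """Additive (Laplace) smoothing: shrink a per-place statistic toward
    the population value. A place with exactly `relaxation` observations
    lands halfway between its own statistic and the population one."""
    return (place_n * place_stat + relaxation * population_stat) / (place_n + relaxation)

# Place mad 0.4 with 300 observations, population mad 0.2 -> mean 0.3
print(relaxed_stat(0.4, 300, 0.2, relaxation=300))
```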

Time

The second largest share of the feature set belongs to time features. Here I converted all time period counts to period density counts in order to handle the two drops in the time frequency. Periods include 27 two-week periods prior to the end of the summary data and 27 one-week periods prior to the end of the summary data. I also included features that look at the two-week densities between 1 and 75 weeks before the observations. These features resulted in missing values, but XGBoost is able to handle them. Additional features were added for the clear yearly pattern of some places.

Weekly counts

Hour, day and week features were calculated using the historical densities with and without cyclical smoothing and with or without relaxation. I suspected an interaction between the hour of the day and the day of the week and also added cyclical hour-day features. Features were added for daily 15-minute intervals as well. The cyclical smoothing is applied with Gaussian windows. The windows were chosen such that the smoothed hour, hour-week and 15-minute blocks capture different frequencies.
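As an illustration of cyclical smoothing with a Gaussian window, here is a Python sketch; the window width below is an arbitrary stand-in, not one of the widths actually used:

```python
import math

def cyclical_smooth(counts, sd=1.0):
    """Smooth a cyclic vector of counts (e.g. 24 hourly densities) with
    a Gaussian window that wraps around the cycle boundary."""
    n = len(counts)
    smoothed = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            d = min(abs(j - i), n - abs(j - i))  # circular distance
            w = math.exp(-0.5 * (d / sd) ** 2)
            num += w * counts[j]
            den += w
        smoothed.append(num / den)
    return smoothed

hourly = [0.0] * 24
hourly[12] = 24.0  # a single spike at noon spreads to neighboring hours
print([round(v, 2) for v in cyclical_smooth(hourly, sd=1.5)])
```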

Other time features include extrapolated weekly densities using various time series models (arima, Holt-Winters and exponential smoothing). Further, the time since the end of the summary period was also added as well as the time between the end of the summary period and the last check in.

Accuracy

Understanding accuracy was the result of generating many plots. There is a significant but low correlation between accuracy and the variation in x and y but it is not until accuracy is binned in approximately equal sizes that the signal becomes visible. The signal is more accurate for accuracies in the 45-84 range (GPS data?).

Mean variation from the median in x versus 6 time and 32 accuracy groups

The accuracy distribution seems to be a mixed distribution with three peaks which changes over time. It is likely related to three different mobile connection types (GPS, Wi-Fi or cellular). The places show different accuracy patterns, and features were added to indicate the relative accuracy group densities. The middle accuracy group was set to the 45-84 range. I added relative place densities for 3 and 32 approximately equally sized accuracy bins. It was also discovered that the location is related to the three accuracy groups for many places; this pattern was captured by adding features for the different accuracy groups. A natural extension of the nearest neighbor calculation would incorporate the accuracy group, but I no longer had time to implement it.

The x-coordinates seem to be related to the accuracy group for places like 8170103882

Z-scores

Tens of z-scores were added to indicate how similar a new observation is to the historical patterns of the place candidates. Robust z-scores ((f-median(f))/mad(f) instead of (f-mean(f))/sd(f)) gave the best results.
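A minimal Python sketch of the robust z-score (toy data mine), showing why it is robust: a single outlier barely moves the median and mad, unlike the mean and sd:

```python
import statistics

def robust_z(value, history):
    """Robust z-score: distance of a new observation from the historical
    median, scaled by the median absolute deviation (mad) rather than
    the mean and standard deviation."""
    med = statistics.median(history)
    mad = statistics.median(abs(v - med) for v in history)
    return (value - med) / mad

# The outlier 50 leaves median=10 and mad=1 intact -> score (12-10)/1
print(robust_z(12.0, [9.0, 10.0, 10.0, 11.0, 50.0]))  # 2.0
```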

Most important features

Nearest neighbors are the most important features for the studied models. The most significant nearest neighbor features appear around K=100 for distance constant ratios around 2.5. Hourly and daily densities were all found to be very important as well and the highest feature ranks are obtained after smoothing. Relative densities of the three accuracy groups also appear near the top of the most important features. An interesting feature that also appears at the top of the list relates to the daily density 52 weeks prior to the check in. There is a clear yearly pattern which is most obvious for places with the highest daily counts.

Clear yearly pattern for place 5872322184. The green line goes back 52 weeks since the highest daily count

The feature files are about 800MB for each batch and I saved all the features to an external HD.

Candidate selection 2

The features from the previous section are used to generate binary classification models on 15 different train batches using XGBoost. With 100 candidates per observation this is a slow process, so it made sense to narrow the number of candidates down to 20 at this stage. I did not perform any downsampling in my final approach since all zeros (no match between the candidate and the true place) contain valuable information; in my experience XGBoost handles unbalanced data quite well. I did consider omitting observations that didn't contain the true class in the top 100, but this resulted in slightly worse validation scores. The reasoning is the same as above: those values contain valuable information! The 15 candidate selection models are built with the top 142 features. The feature importance order is obtained by considering the XGBoost feature importance ranks of 20 models trained on different batches. Hyperparameters were selected using the local validation batches. The 15 candidate selection models each generate a predicted probability P(place_match|data); I average those to select the top 20 candidates for each observation.

At this point I also dropped observations that belong to places that only have observations in the train/validation period. This filtering was also applied to the test set.

First level learners

The first level learners are very similar to the second candidate selection models other than the fact that they were fit on one fifth of the data for 75 of the 100 models. The other 25 models were fit on 100 candidates for each observation. The 100 base XGBoost learners were fit on different random parts of the training period. Deep trees gave me the best results here (depth 11) and the eta constant was set to (11 or 12)/500 for 500 rounds. Column sampling also helped (0.6) and subsampling the observations (0.5) did not hurt but of course resulted in a fitting speed increase. I included either all 430 features or a uniform random pick of the ordered features by importance in a desirable feature count range (100-285 and 180-240). The first level learner framework was created to handle multiple first level learner types other than XGBoost. I experimented with the nnet and H2O neural network implementations but those were either too slow in transferring the data (H2O) or too biased (nnet). The way XGBoost handles missing values is another great advantage over the mentioned neural network implementations. Also, the XGBoost models are quite variable since they are trained on different random train batches with differing hyperparameters (eta constant, number of features and the number of considered candidates (either 20 or 100)).

Second level learners

The 30 second level learners combine the predictions of the 100 first level models along with 21 manually selected features for all top 20 candidates. The 21 additional features are high level features such as the x, y and accuracy values as well as the time since the end of the summary period. The added value of the 21 features is very low but constant on the validation set and the public leaderboard (~0.02%). The best local validation score was obtained by considering moderate tree depths (depth 7) and the eta constant was set to 8/200 for 200 rounds. Column sampling also helped (0.6) and subsampling the observations (0.5) did not hurt but again resulted in a fitting speed increase. The candidates are ordered using the mean predicted probabilities of the 30 second level XGBoost learners.

Analysis of the local MAP@3 indicated better results for accuracies in the 45-84 range. The difference between the local and test validation scores is largely related to this observation. There seems to be a trend towards the use of devices that show less variation.

Local MAP@3 versus accuracy groups

Conclusion

The private leaderboard standing below, used to rank the teams, shows the top 30 teams. It was a very close competition in the end and Markus would have been a well-deserved winner as well. We were very close to each other ever since the third week of the eight-week contest and pushed each other forward. The fact that the test data contains 8.6 million records and that it was split randomly for the private and public leaderboard resulted in a very confident estimate of the private standing given the public leaderboard. I was most impressed by the approaches of Markus and Jack (Japan) who finished in third position. You can read more about their approaches on the forum. Many others also contributed valuable insights.

Private leaderboard score (MAP@3) – two teams stand out from the pack

I started the competition using a modest 8GB laptop but decided to purchase a €1500 workstation two weeks into the competition to speed up the modeling. Starting with limited resources ended up being an advantage since it forced me to think of ways to optimize the feature generation logic. My big friend in this competition was the data.table package.

Running all steps on my 48GB workstation would take about a month. That seems like a ridiculously long time but it is explained by the extended computation time of the nearest neighbor features. While calculating the NN features I was continuously working on other parts of the workflow so speeding the NN logic up would not have resulted in a better final score.

Generating a ~0.62 score could however be achieved in about two weeks by focusing on the most relevant NN features. I would suggest considering 3 of the 7 distance constants (1, 2.5 and 4) and omitting the mid-K NN features. Cutting the first level models from 100 to 10 and the second level models from 30 to 5 would also not result in a strong performance decrease (an estimated decrease of 0.1%) and would cut the computation time to less than a week. You could of course run the logic on multiple instances and speed things up further.

I really enjoyed working on this competition even though it was already one of the busiest periods of my life. The competition was launched while I was in the middle of writing my Master's thesis in statistics on top of a full-time job. The data shows many interesting noisy and time-dependent patterns which motivated me to play with the data before and after work. It was definitely worth every second of my time! I was inspired by the work of other Kaggle winners and successfully implemented my first two-level model. Winning the competition is a nice extra, but it's even better to have learnt a lot from the other competitors. Thank you all!

I look forward to your comments and suggestions, please go to my original post to post your comments.

Tom.

Job – Junior Data Scientist

Are you pursuing a career in data science?

We have a great opportunity for you: an intensive training program combined with interesting job opportunities!

Interested? Check out http://di-academy.com/bootcamp/, follow the link to our data science survey and send your CV to training@di-academy.com.

Once selected, you’ll be invited for the intake event that will take place in Brussels this summer.

Hope to see you there,

Nele & Philippe

Training – Hands-on with R Shiny – November 26th

Shiny lets you create nice reactive web applications on top of R computations, with no web development skills required. In this two-day Shiny course you will create your own sophisticated application and learn how to deploy it.

We start off easy by learning how to control the layout of your application and how to add widgets. Next up we will learn about reactivity: what options does your application have to react to user input, immediately or via submit buttons? During the course, applications will get more sophisticated by adding entire R scripts and input data. You will also learn how to return results to your users: displaying results, downloading PDFs or other file formats, mailing results, and more. Moreover, we will go into detail on how to interact with the results, for example via hovering over or clicking on plots.

At the end you will learn how to deploy your web application on shinyapps.io as well as on your own server.

This training event is organised in collaboration with Oak3 (www.oak3.be). The Oak3 Academy is an IT Learning Center providing hands-on, intensive training and coaching to help students develop the skills they need for a successful career as an Information Technology Professional or as a knowledge worker (end-user of software). Our goal is to provide the highest quality training and knowledge transfer that enables a person to start or enhance his or her career as an IT professional or knowledge worker, in a short period of time. We therefore offer knowledge assimilation, facilitate expertise transfer and provide a rewarding learning experience. Our training solutions are designed to help students learn faster, master the latest information technologies and perform smarter.

Prerequisites: Previous experience with R is required, no HTML or CSS knowledge is needed.

When: Thursday, 26 November 2015, 9:00 AM to Friday, 27 November 2015, 5:00 PM (CET)

Where: European Data Innovation Hub – 23 Vorstlaan Watermaal-Bosvoorde, Brussel 1170

Registration: Eventbrite

Training – Hands-on with SparkR – Brussels – November 24

As of June 2015, SparkR is integrated in Spark 1.4.0. However, this is still a work in progress: in the original version, no Spark MLlib machine learning algorithms were accessible from R. In Spark 1.5.0 it is already possible to create generalized linear models (glm).

In this one-day SparkR course, you will understand how Spark works under the hood (the MapReduce paradigm, lazy evaluation, ...) and learn how to use SparkR. You will start by setting up a local Spark cluster and accessing it via R. Next up, you will learn basic data transformations in SparkR, either via R code or via SparkSQL. Finally, we will use SparkR's glm, compare it to R's glm, and implement our own machine learning algorithm.

This training event is organised in collaboration with Oak3 (http://www.oak3.be); see the description of the Oak3 Academy in the Shiny training announcement above.

Prerequisites: Previous experience with R is required, notions of Apache Spark are useful but not required.

When: Tuesday, November 24, 2015 from 9:00 AM to 5:00 PM (CET)

Where: European Data Innovation Hub – 23 Vorstlaan Watermaal-Bosvoorde, Brussel 1170 BE

Registration: Eventbrite

The ABC of Datascience blogs – collaborative update

abc-letters-on-white-sandra-cunningham

A – ACID – Atomicity, Consistency, Isolation and Durability

B – Big Data – Volume, Velocity, Variety

C – Columnar (or Column-Oriented) Database

  • CoolData By Kevin MacDonell on Analytics, predictive modeling and related cool data stuff for fund-raising in higher education.
  • Cloud of data blog By Paul Miller, aims to help clients understand the implications of taking data and more to the Cloud.
  • Calculated Risk, Finance and Economics

D – Data Warehousing – Relevant and very useful

E – ETL – Extract, transform and load

F – Flume – A framework for populating Hadoop with data

  • Facebook Data Science Blog, the official blog of interesting insights presented by Facebook data scientists.
  • FiveThirtyEight, by Nate Silver and his team, gives a statistical view of everything from politics to science to sports with the help of graphs and pie charts.
  • Freakonometrics, by Charpentier, a professor of mathematics, offers a nice mix of generally accessible and more challenging posts on statistics-related subjects, all with a good sense of humor.
  • Freakonomics blog, by Steven Levitt and Stephen J. Dubner.
  • FastML, covering practical applications of machine learning and data science.
  • FlowingData, the visualization and statistics site of Nathan Yau.

G – Geospatial Analysis – A picture worth 1,000 words or more

H – Hadoop, HDFS, HBASE

  • Harvard Data Science, thoughts on Statistical Computing and Visualization.
  • Hyndsight by Rob Hyndman, on forecasting, data visualization and functional data.

I – In-Memory Database – A new definition of superfast access

  • IBM Big Data Hub Blogs, blogs from IBM thought leaders.
  • Insight Data Science Blog on latest trends and topics in data science by Alumnus of Insight Data Science Fellows Program.
  • Information is Beautiful, by independent data journalist and information designer David McCandless, author of the book ‘Information is Beautiful’.
  • Information Aesthetics designed and maintained by Andrew Vande Moere, an Associate Professor at KU Leuven university, Belgium. It explores the symbiotic relationship between creative design and the field of information visualization.
  • Inductio ex Machina by Mark Reid’s research blog on machine learning & statistics.

J – Java – Hadoop gave it a nice push

  • Jonathan Manton’s blog by Jonathan Manton, Tutorial-style articles in the general areas of mathematics, electrical engineering and neuroscience.
  • JT on EDM, James Taylor on Everything Decision Management
  • Justin Domke blog, on machine learning and computer vision, particularly probabilistic graphical models.
  • Juice Analytics on analytics and visualization.

K – Kafka – High-throughput, distributed messaging system originally developed at LinkedIn

L – Latency – Low Latency and High Latency

  • Love Stats Blog By Annie, a market research methodologist who blogs about sampling, surveys, statistics, charts, and more
  • Learning Lover on programming, algorithms with some flashcards for learning.
  • Large Scale ML & other Animals, by Danny Bickson, who started GraphLab, an award-winning large-scale open source project

M – Map/Reduce – MapReduce

N – NoSQL Databases – No SQL Database or Not Only SQL

O – Oozie – Open-source workflow engine managing Hadoop job processing

  • Occam’s Razor by Avinash Kaushik, examining web analytics and Digital Marketing.
  • OpenGardens, Data Science for Internet of Things (IoT), by Ajit Jaokar.
  • O’Reilly Radar, covering a wide range of research topics and books.
  • Oracle Data Mining Blog, Everything about Oracle Data Mining – News, Technical Information, Opinions, Tips & Tricks. All in One Place.
  • Observational Epidemiology A college professor and a statistical consultant offer their comments, observations and thoughts on applied statistics, higher education and epidemiology.
  • Overcoming Bias, by Robin Hanson and Eliezer Yudkowsky, presents statistical analysis in reflections on honesty, signaling, disagreement, forecasting and the far future.

P – Pig – Platform for analyzing huge data sets

  • Probability & Statistics Blog By Matt Asher, statistics grad student at the University of Toronto. Check out Asher’s Statistics Manifesto.
  • Perpetual Enigma by Prateek Joshi, a computer vision enthusiast writes question-style compelling story reads on machine learning.
  • PracticalLearning by Diego Marinho de Oliveira on Machine Learning, Data Science and Big Data.
  • Predictive Analytics World blog, by Eric Siegel, founder of Predictive Analytics World and Text Analytics World, and Executive Editor of the Predictive Analytics Times, makes the how and why of predictive analytics understandable and captivating.

Q – Quantitative Data Analysis

R – Relational Database – Still relevant and will be for some time

  • R-bloggers, the best blogs from the rich R community, with code, examples, and visualizations.
  • R chart A blog about the R language written by a web application/database developer.
  • R Statistics By Tal Galili, a PhD student in Statistics at the Tel Aviv University who also works as a teaching assistant for several statistics courses in the university.
  • Revolution Analytics blog, hosted and maintained by Revolution Analytics.
  • Rick Sherman: The Data Doghouse on business and technology of performance management, business intelligence and datawarehousing.
  • Random Ponderings by Yisong Yue, on artificial intelligence, machine learning & statistics.

S – Sharding (Database Partitioning)  and Sqoop (SQL Database to Hadoop)

  • Salford Systems Data Mining and Predictive Analytics Blog, by Dan Steinberg.
  • Sabermetric Research, by Phil Birnbaum, who blogs about statistics in baseball, the stock market, sports prediction and a variety of other subjects.
  • Statisfaction, a blog jointly written by PhD students and post-docs from Paris (Université Paris-Dauphine, CREST). Mainly tips and tricks useful in everyday work, links to various interesting pages, articles, seminars, etc.
  • Statistically Funny True to its name, epidemiologist Hilda Bastian’s blog is a hilarious account of the science of unbiased health research with the added bonus of cartoons.
  • SAS Analysis, a weekly technical blog about data analysis in SAS.
  • SAS blog on text mining, voice mining and unstructured data, by SAS experts.
  • SAS Programming for Data Mining Applications, by LX, Senior Statistician in Hartford, CT.
  • Shape of Data, presents an intuitive introduction to data analysis algorithms from the perspective of geometry, by Jesse Johnson.
  • Simply Statistics By three biostatistics professors (Jeff Leek, Roger Peng, and Rafa Irizarry) who are fired up about the new era where data are abundant and statisticians are scientists.
  • Smart Data Collective, an aggregation of blogs from many interesting data science people
  • Statistical Modeling, Causal Inference, and Social Science by Andrew Gelman
  • Stats with Cats, by Charlie Kufs, who has been crunching numbers for over thirty years, first as a hydrogeologist and, since the 1990s, as a statistician. His tagline: when you can’t solve life’s problems with statistics alone.
  • StatsBlog, a blog aggregator focused on statistics-related content, and syndicates posts from contributing blogs via RSS feeds.
  • Steve Miller BI blog, at Information management.

T – Text Analysis – The larger the information, the more analysis is needed

U – Unstructured Data – Growing faster than the speed of thought

V – Visualization – Important to keep the information relevant

  • Vincent Granville blog. Vincent, the founder of AnalyticBridge and Data Science Central, regularly posts interesting topics on Data Science and Data Mining

W – Whirr – Big Data Cloud Services i.e. Hadoop distributions by cloud vendors

X – XML – Still eXtensible and needs no introduction

  • Xi’an’s Og, a blog written by a professor of Statistics at Université Paris Dauphine, mainly centred on computational and Bayesian topics.

Y – Yottabyte – Equal to 1,000 zettabytes, 1 million exabytes, or 1 billion petabytes

Z – Zookeeper – Helps manage Hadoop nodes across a distributed network

Feel free to add your preferred blog in the comments below.

Other resources:

Nice video channels:

More Jobs ?


Click here for more Data related job offers.
Join our community on linkedin and attend our meetups.
Follow our twitter account: @datajobsbe

Improve your skills:

Why not join one of our #datascience trainings to sharpen your skills?

Special rates apply if you are a job seeker.

Here are some training highlights for the coming months:

Check out the full agenda here.

Join the experts at our Meetups:

Each month we organize a Meetup in Brussels focused on a specific DataScience topic.

Brussels Data Science Meetup

Brussels, BE
1,417 Business & Data Science pros

The Brussels Data Science Community – Mission: Our mission is to educate, inspire and empower scholars and professionals to apply data sciences to address humanity’s grand cha…

Next Meetup

DATA UNIFICATION IN CORPORATE ENVIRONMENTS

Wednesday, Oct 14, 2015, 6:30 PM
57 Attending

Check out this Meetup Group →

Training – Machine learning with R – September 28 & 29 – Brussels


Next week, on Monday and Tuesday:

Join us for a two-day machine learning course using R. Special discounts apply; contact @pvanimpe for more details.

Please register and pay through Eventbrite

A must-have skill for every data scientist

This course is a hands-on course covering the use of statistical machine learning methods available in R.

The following basic learning methods will be covered and used on common datasets.

  • classification trees (rpart)
  • feed-forward neural networks and multinomial regression
  • random forests
  • boosting for classification and regression
  • bagging for classification and regression
  • penalized regression modelling (lasso/ridge regularized generalized linear models)
  • model based recursive partitioning (trees with statistical models at the nodes)
  • training and evaluation using the caret and ROCR packages
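To give a flavour of the hands-on format, here is a minimal sketch of the caret workflow the course builds on: training a classification tree (rpart) with cross-validation and evaluating it on a held-out split. It assumes the caret and rpart packages are installed; the dataset and settings are illustrative only, not course material.

```r
# Minimal caret workflow: classification tree with 5-fold cross-validation.
# Assumes install.packages(c("caret", "rpart")) has been run.
library(caret)

set.seed(42)

# Hold out 20% of the built-in iris data for evaluation.
idx      <- createDataPartition(iris$Species, p = 0.8, list = FALSE)
train_df <- iris[idx, ]
test_df  <- iris[-idx, ]

# caret::train gives a uniform interface over many methods; swapping
# method = "rpart" for "rf" or "glmnet" switches to random forests or
# penalized regression with no other changes to the call.
fit <- train(Species ~ ., data = train_df, method = "rpart",
             trControl = trainControl(method = "cv", number = 5))

# Evaluate on the held-out split.
preds <- predict(fit, newdata = test_df)
print(confusionMatrix(preds, test_df$Species))
```

For binary problems, the ROCR package complements this by turning predicted probabilities into ROC curves and AUC values.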

Course duration: 2 days.


Instructor: Jan Wijffels, BNOSAC

Jan Wijffels is the founder of www.bnosac.be – a consultancy company specialised in statistical analysis and data mining. He holds a Master in Commercial Engineering, an MSc in Statistics and a Master in Artificial Intelligence, and has been using R for 8 years, developing and deploying R-based solutions for clients in the private sector. He has developed and co-developed the R packages ETLUtils and ffbase.

Job – ULB – Postdoc in Machine Learning – 2 years


ULB MLG Brussels: Postdoc in machine learning, data science and big data for security (e.g. fraud detection)

2 year postdoc position

Description

Research in big data and scalable machine learning with application to security problems (e.g. fraud detection) in the context of a project funded by Brussels Region.

http://mlg.ulb.ac.be/BruFence

Required skills:

  • You have a PhD in Machine Learning, Computational Science, (Bio)Engineering, Data Science, or equivalent.
  • Expertise in statistical machine learning, data mining, big data, MapReduce, Spark, Python and R programming.
  • A plus: experience applying big data mining to real problems and security applications, notably credit card fraud detection.
  • You are fluent in English.

The successful applicant will be hosted by the Machine Learning Group, co-headed by Prof. Gianluca Bontempi.

    Starting date: ASAP.
    For more information, please contact Prof. Gianluca Bontempi: gbonte@ulb.ac.be.
    Please send your CV, a motivation letter, contact information for three references, and a publication list with the citation count of each published paper.

Nr of positions available : 1

Research Fields

Computer science – Modelling tools

Career Stage

Experienced researcher or 4-10 yrs (Post-Doc)

Research Profiles

First Stage Researcher (R1)

Comment/web site for additional job details

mlg.ulb.ac.be

Job – Infofarm – DataScientist


InfoFarm is expanding and is looking for a new Data Scientist!

COMPANY PROFILE

InfoFarm is a Data Science company that focuses on delivering high-quality Data Science and Big Data solutions to its customers. We owe our name to one of the many informal brainstorming sessions among colleagues that arise spontaneously during the lunch break. One pleasant session later, we had the whole farm-life analogy in place: we plant ideas, we plough through our customers’ data, grow it with other data or insights, and harvest business value by applying various (machine learning) techniques to it.

We have a unique team with diverse talents and backgrounds: Data Scientists (people with a research background in a quantitative field), Big Data Developers (strongly technical Java programmers) and infrastructure people (the bits-and-bytes people). Together we develop great solutions for our customers across various sectors. To strengthen our team, we are looking for a Data Scientist.

JOB DESCRIPTION

As a Data Scientist you explore datasets, provide insights and help customers take action based on those insights. You work independently or in a mixed team, either in our offices or on assignment at the customer. You are not afraid to come forward with creative solutions to complex problems. You guide our Big Data Developers in building Big Data applications based on the insights you have gained. You will find yourself in a variety of sectors and environments: one day you work for a telecom company, the next you get to know Belgium’s water purification system, and then you build a Big Data application in the logistics sector. At InfoFarm no two projects are alike, but that does not put you off. You look forward to learning about different businesses, following new developments and technologies on the market, and sharing that knowledge with our customers and within the team.

JOB REQUIREMENTS

  • You have a master’s degree in a quantitative field (mathematics, engineering, …). A PhD is a plus.
  • Knowledge of a data analysis language (R, Python, …) gives you a head start; willingness to learn one of these languages is a requirement.
  • Knowledge of SQL is an advantage.
  • Getting to grips with Big Data tools (Hadoop, Hive, Pig, Spark, Spark MLlib, …) does not scare you.
  • Knowledge of Java and Scala is an added value.

Apply:

Make sure that you are a member of the Brussels Data Science Community linkedin group before you apply. Join  here.

Please note that we also manage other vacancies that are not public, if you want us to bring you in contact with them too, just send your CV to datasciencebe@gmail.com .

See the full job description below and send your CV in reply to jobs@infofarm.be!

(An English version can be requested via jobs@infofarm.be)

Check out the original post: http://www.infofarm.be/articles/were-hiring-data-scientist

Job – Infofarm – Big Data Developer


InfoFarm is expanding and is looking for a new Big Data Developer!

COMPANY PROFILE

InfoFarm is a Data Science company that focuses on delivering high-quality Data Science and Big Data solutions to its customers. We owe our name to one of the many informal brainstorming sessions among colleagues that arise spontaneously during the lunch break. One pleasant session later, we had the whole farm-life analogy in place: we plant ideas, we plough through our customers’ data, grow it with other data or insights, and harvest business value by applying various (machine learning) techniques to it.

We have a unique team with diverse talents and backgrounds: Data Scientists (people with a research background in a quantitative field), Big Data Developers (strongly technical Java programmers) and infrastructure people (the bits-and-bytes people). Together we develop great solutions for our customers across various sectors. To strengthen our team, we are looking for a Big Data Developer.

JOB DESCRIPTION

As a Big Data Developer you mainly build Big Data applications on the Apache Hadoop or Apache Spark platform. You work independently or in a mixed team, either in our offices or on assignment at the customer. You are not afraid to come forward with creative solutions to complex problems. One day you work for a telecom company, the next you get to know Belgium’s water purification system, and then you build a Big Data application in the logistics sector. At InfoFarm no two projects are alike, but that does not put you off. You look forward to learning about different businesses, following new developments and technologies on the market, and sharing that knowledge with our customers and within the team.

JOB REQUIREMENTS

You have at least 2-3 years of experience in Java development. Certifications are an added value.

You can work with Maven, Spring or EJB, and one or more RDBMSs.

Knowledge of Hadoop, Hive and Pig is a plus, as is knowledge of Spark and Spark MLlib. Willingness to get certified in one of these domains is a necessity.

Knowledge of R and Scala is an advantage.

You have at least a Bachelor’s degree in Applied Computer Sciences.

Apply:

Make sure that you are a member of the Brussels Data Science Community linkedin group before you apply. Join  here.

Please note that we also manage other vacancies that are not public, if you want us to bring you in contact with them too, just send your CV to datasciencebe@gmail.com .

See the full job description below and send your CV in reply to jobs@infofarm.be!

(An English version can be requested via jobs@infofarm.be)

Check out the original post: http://www.infofarm.be/articles/were-hiring-big-data-developer-0