Next Executive Session: What about a debate on Big Data & Ethics?


 

Next Executive Sessions:

After Stephen Brobst's interesting talk (he is ranked the #4 CTO in the United States), we now want to organise a debate on Big Data & Ethics. Pierre-Nicolas proposes the following format: short presentations by each participant representing the vision of his/her company, followed by a debate. Any suggestions? Who wants to participate?

 

Philippe, may I suggest that you organise a workshop with about ten Big Data leads on ethics and Big Data?
The audience should be limited to Big Data decision-makers who can genuinely represent their company's point of view.
In terms of structure:
 – a 10-15 min (TED format) presentation per company on the ethical issues at stake, on how algorithms are designed to take these ethical aspects into account, on the challenges and limits of Big Data, and on what companies refuse to do in terms of data collection (the "red line" not to be crossed)
 – an exchange/debate workshop

 


Here is some inspiration on the topic.

The RTBF's big data expert Pierre-Nicolas Schwab reflects on how to develop algorithms for PSM (public service media) that match their values and mitigate the potential flaws of artificial intelligence. He invites EBU Big Data Initiative participants to join the RTBF workshop addressing this issue this December in Brussels.

“Most large companies are investing massively in Big Data technologies to leverage the value of their data. While many still consider Big Data an inescapable business trend, concerns are growing regarding the impact of Big Data on our daily lives.

A ‘man vs machine’ Artificial Intelligence (AI) milestone was reached in March this year when the deep-learning algorithm AlphaGo defeated one of the world’s best Go players, Lee Sedol. In an article I published before the game, I wondered how advances in AI were changing our lives. Cathy O’Neil, a Harvard PhD mathematician, expressed similar concerns at the USI conference in Paris in early June and will be releasing her book “Weapons of Math Destruction” in September. The book elaborates on the concerns she has expressed on her blog about ill-founded decisions triggered by algorithms and how big data “increases inequality and threatens democracy”. The title may be provocative, but the book promises to go beyond the filter bubble effect made popular by Eli Pariser.

Because algorithms are only as good as those who build them, we need to open up the models, not only the data. Those models need to be subject to criticism, peer review and third-party scrutiny. This will prevent the use of biased or even dangerous algorithms (e.g. the French university selection algorithm scandal revealed earlier this month) and will increase people’s trust in organizations that use algorithms. To illustrate the latter: the French fiscal authorities are now forced to make public the criteria used to select a taxpayer for an audit. This shows that a change is under way and, as PSM, we must embrace and support it.

Not only must we avoid replicating flawed models (in particular recommendation algorithms that trap users in “filter bubbles”) but we, as public organizations, have duties towards our democratic societies and their citizens. That’s why we need to (1) engage in a global reflection on how our algorithms need to be shaped to reflect our values and (2) pave the way for better practices that will inspire other organizations in different industries.

 

Hackathon – Aug 30-31 – Consumer Goods


Dear Data Science Community members,

Hope you are ALL doing great! With this message, we are pleased to inform you of the upcoming Consumer Goods Hackathon, which will take place on Aug 30th-31st at our innovation campus in Strombeek-Bever.

Together with several other large non-competing companies, we are organizing a two-day digital disruption hackathon for consumer goods. Deloitte is leading, and so far AB InBev, ING, Eggsplore, bpost and P&G have joined from the large-company side. Other companies are still considering joining.

The large companies will share their challenges in 1. digital disruption of business models and approaches, 2. production processes and 3. packaging. Start-ups/SMEs will share and collaborate (in teams) on novel disruptive solutions. If desired, the larger companies can work on their own projects with interested start-ups/SMEs after the event. 2D sketching, 3D printing and film/animation support will be provided on site for the teams to build their propositions.

Participation in the event is free of charge. The website with further details, the specific business challenges for each of the 3 topics, and the registration form is: consumergoodshackathon.

For practical reasons, participants are required to register individually and to specify which challenge they would like to work on (click challenges to see the full list).

It would be great if you could spread the word and pass on the website address to specific start-ups in your network that you find relevant for the defined spaces.

Let me know if you have further questions or want to discuss this with the broader team.

Thanks in advance and looking forward to your feedback.

Best regards.

Benjamin Bollot @P&G

Job – Base 3 – Business Intelligence Consultant / Data Analytics / ETL / DWH


Hello Philippe,

At Base 3 we’re on a course for growth and looking for people to be a part of it. We’re hiring 12 people in Data Analytics, mostly around Microsoft technologies (SSIS, SSAS, SSRS, …) and SAS, but we’re open to other technologies too. Diversity is important to us, as is personal attention.
The full job description can be found at https://www.ictjob.be/nl/vacature/base-3-business-intelligence-consultant-data-analytics-etl-dwh/1-78127.
I’m of course open to explaining more in a first conversation.

Thanks,
Steve


Base 3 – Business Intelligence Consultant / Data Analytics / ETL / DWH

We are growing our business and looking to hire 12 data-inspired professionals.

What makes a great Base 3 Consultant?

Our consultants produce business value for our clients through the use of business intelligence, advanced analytics or big data techniques.

What makes them great is a real passion for data that is infectious to the people around them. They’re strong communicators and work together with our clients, sharing knowledge and ideas along the way.

What does a Consultant do?

There is huge diversity in the types of roles and projects our consultants work on, always assessing and adapting to the needs of the customer.

A sample of the different things you could do throughout the day:

  • Interpret data, analyze results and provide ongoing reports and clear visualizations.
  • Develop and implement data collection systems and other strategies that optimize data-driven decisions.
  • Acquire data from primary or secondary data sources and maintain databases/data systems.
  • Identify, analyze, and interpret trends or patterns in complex data sets.
  • Filter and “clean” data, and review computer reports, printouts, and performance indicators to locate and correct code problems.
  • Work closely with management to prioritize business and information needs.
  • Locate and define new improvement opportunities.

 

What does Base 3 have to offer?

Besides the possibility to work on fantastic projects for great customers, we offer:

  • A competitive salary with great benefits including a company car, group and health insurance.
  • 32 days of holiday.
  • The possibility to build your skills and expertise.
  • A company that values long-term relationships with staff, customers and partners. Relationships based on honest collaboration, mutual trust and respect.

Feel free to contact us if you’d like to know more about our company, culture, colleagues,…

The Company

Founded in 1996, Base 3 has 20 years of experience in the data analytics market in Belgium.

Our consultants have a passion for data. They collaborate with our customers to improve their data-driven decisions. To produce business value, they use business intelligence, advanced analytics or big data techniques. Through their expertise and talent, they produce premium deliverables.

Our customers are our biggest source of inspiration. Their data-related questions and assignments are the challenges that excite our consultants.

When passion meets purpose…

Our Values

Our long-term relationships with staff, customers and partners are based upon honest collaboration, mutual trust and respect.

Apply:

Make sure that you are a member of the Brussels Data Science Community linkedin group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

For further information or to apply for this vacancy, please contact Steven Boen and include your CV.

Summer Data Science activities in Belgium.


We wish you happy holidays. In case you get bored, check out our educational channel on YouTube.

The European Data Innovation Hub remains active during the summer.
Here is a short update on what to expect in the coming weeks:

Thank you for supporting the European Data Innovation Hub, we had a great academic year.

Philippe Van Impe
@pvanimpe
www.di-academy.com

Please forward the information about the data science bootcamp to your peers and friends.

Job – external consultant – R & R-Shiny expertise – Brussels


Ref FL-Rshiny: Freelance, 3 months starting ASAP in Brussels.

For one of our Innovation Partners in Brussels, we are looking for an external consultant who is a specialist in R code and R-Shiny.

The project aims at finding the communes where we have profitable growth potential, and at highlighting the brokers with whom we should work more to reach our objectives.

One important aspect of this project is the visualisation of this information on a map. We chose R-Shiny to develop it, but our knowledge of this language is still limited and we would like to add new functionalities.

The consultant we are looking for must have strong knowledge of R, R-Shiny, R-Leaflet and, if possible, JavaScript. He or she will also need good pedagogical skills: the mission won’t be to program alone, but rather to help upgrade the team’s knowledge through good cooperation while programming. Finally, the working language will be English.

Examples of functionality already in the tool (a minimal sketch follows the list):

  • Map showing the locations of the different points of sale, with some key figures inside popups
  • Table summarizing some key variables about the communes and the points of sale, plus the possibility to download those tables in Excel format
  • Value boxes, inside or outside the map, giving information about the main KPIs of the geographical zone
  • Possibility to zoom in on some strategic geographical zones on the map
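
To make the request concrete, here is a minimal sketch of the first functionality using shiny and leaflet; the data frame and its values are hypothetical, not the existing tool:

```r
# Minimal sketch: a Shiny app with a leaflet map showing points of sale
# with key figures in popups. The `sales_points` data is made up.
library(shiny)
library(leaflet)

sales_points <- data.frame(
  name = c("Point of sale A", "Point of sale B"),
  lat  = c(50.85, 50.88),
  lng  = c(4.35, 4.40),
  kpi  = c(1200, 950)
)

ui <- fluidPage(leafletOutput("map", height = 600))

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet(sales_points) %>%
      addTiles() %>%
      addMarkers(~lng, ~lat, popup = ~paste0(name, "<br/>KPI: ", kpi))
  })
}

shinyApp(ui, server)
```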

Apply:

Make sure that you are a member of the Brussels Data Science Community linkedin group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

Please send your offer to pvanimpe@dihub.eu with ref FL-Rshiny.

How Tom won his first Kaggle competition


This is a copy of Tom’s original post on GitHub.

Winning approach of the Facebook V Kaggle competition

The Facebook V: Predicting Check Ins data science competition, where the goal was to predict which place a person would like to check in to, has just ended. I participated with the goal of learning as much as possible, maybe aiming for a top 10% finish, since this was my first serious Kaggle competition attempt. I managed to exceed all expectations and finished 1st out of 1,212 participants! In this post, I’ll explain my approach.

Overview

This blog post will cover all sections to go from the raw data to the winning submission. Here’s an overview of the different sections. If you want to skip ahead, just click the section title to go there.

The R source code is available on GitHub. This thread on the Kaggle forum discusses the solution on a higher level and is a good place to start if you participated in the challenge.

Introduction

Competition banner

From the competition page: The goal of this competition is to predict which place a person would like to check in to. For the purposes of this competition, Facebook created an artificial world consisting of more than 100,000 places located in a 10 km by 10 km square. For a given set of coordinates, your task is to return a ranked list of the most likely places. Data was fabricated to resemble location signals coming from mobile devices, giving you a flavor of what it takes to work with real data complicated by inaccurate and noisy values. Inconsistent and erroneous location data can disrupt the experience for services like Facebook Check In.

The training data consists of approximately 29 million observations where the location (x, y), accuracy and timestamp are given along with the target variable, the check-in location. The test data contains 8.6 million observations where the check-in location should be predicted based on the location, accuracy and timestamp. The train and test data sets are split based on time. There is no concept of a person in this dataset; all the observations are events, not people.

A ranked list of the top three most likely places is expected for all test records. The leaderboard score is calculated using the MAP@3 criterion. Consequently, ranking the actual place as the most likely candidate gets a score of 1, ranking the actual place as the second most likely gets a score of 1/2 and a third rank of the actual place results in a score of 1/3. If the actual place is not in the top three of ranked places, a score of 0 is awarded for that record. The total score is the mean of the observation scores.
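
To make the metric concrete, here is a minimal R sketch of MAP@3; the helper name and data layout are mine, not part of the competition code:

```r
# A record scores 1/rank if the true place is in the ranked top-3
# predictions, otherwise 0; the total score is the mean over records.
map_at_3 <- function(predicted, actual) {
  # predicted: list of character vectors with the ranked top-3 place ids
  # actual: character vector with the true place id per record
  scores <- mapply(function(top3, truth) {
    r <- match(truth, top3)
    if (is.na(r) || r > 3) 0 else 1 / r
  }, predicted, actual)
  mean(scores)
}

# Correct at rank 1, correct at rank 3, and missed: (1 + 1/3 + 0) / 3
map_at_3(list(c("a", "b", "c"), c("x", "y", "z"), c("p", "q", "r")),
         c("a", "z", "m"))
```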

Check Ins where each place has a different color

Exploratory analysis

Location analysis of the train check ins revealed interesting patterns in the variation of x and y. There appears to be far more variation in x than in y. It was suggested that this could be related to the streets of the simulated world. The difference in variation between x and y, however, differs between places, and there is no obvious spatial (x-y) pattern in this relationship.

It was quickly established by the community that time is measured in minutes and could thus be converted to relative hours and days of the week. This means that the train data covers 546 days and the test data spans 153 days. All places seem to live in independent time zones with clear hourly and daily patterns. No spatial pattern was found with respect to the time patterns. There are however two clear dips in the number of check ins during the train period.

Accuracy was by far the hardest input to interpret. It was expected that it would be clearly correlated with the variation in x and y, but the pattern is not as obvious. Halfway through the competition I cracked the code; the details are discussed in the Feature engineering section.

I wrote an interactive Shiny application to research these interactions for a subset of the places. Feel free to explore the data yourself!

Problem definition

The main difficulty of this problem is the very large number of classes (places). With 8.6 million test records there are about a trillion (10^12) place-observation combinations. Luckily, most of the classes have a very low conditional probability given the data (x, y, time and accuracy). The major strategy on the forum to reduce the complexity consisted of calculating a classifier for each of many x-y rectangular grids. It makes much sense to use the spatial information, since this shows the most obvious and strong pattern for the different places. This approach makes the complexity manageable but is likely to lose a significant amount of information since the data is so variable. I decided to model the problem with a single binary classification model in order to avoid ending up with many high-variance models. The lack of any major spatial patterns in the exploratory analysis supports this approach.

Strategy

Generating a single classifier for all place-observation combinations would be infeasible even with a powerful cluster. My approach consists of a stepwise strategy in which the conditional place probability is only modeled for a set of place candidates. A simplification of the overall strategy is shown below.

High level strategy

The given raw train data is split into two chronological parts, with a ratio similar to that between the train and test data. The summary period contains all given train observations of the first 408 days (minutes 0-587158). The second part of the given train data contains the next 138 days and will be referred to as the train/validation data from now on. The test data spans 153 days, as mentioned before.

The summary period is used to generate the train and validation features, and the given train data is used to generate the same features for the test data.

The three raw data groups (train, validation and test) are first sampled down into batches that are as large as possible but can still be modeled with the available memory. I ended up using batches of approximately 30,000 observations on a 48GB workstation. The sampling process is fully random and results in train/validation batches that span the entire 138-day train range.
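
As an illustration of this sampling step, a fully random batch assignment could look like the following data.table sketch; the table and column names are assumptions, not the competition code:

```r
# Randomly assign each observation of the raw train data to a batch of
# roughly 30,000 rows; a random permutation of the labels ensures each
# batch spans the whole train range.
library(data.table)

train <- data.table(row_id = seq_len(1e6))   # stand-in for the real data
batch_size <- 30000
n_batches  <- ceiling(nrow(train) / batch_size)

train[, batch := sample(rep(seq_len(n_batches), length.out = .N))]
```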

Next, 15 XGBoost models are built to reduce the number of candidates to 20 in the second candidate selection step. The conditional probability P(place_match|features) is modeled for all ~30,000*100 place-observation combinations, and the mean predicted probability of the 15 models is used to select the top 20 candidates for each observation. These models use features that combine place and observation measures of the summary period.

The same features are used to generate the first level learners. Each of the 100 first level learners is again an XGBoost model, built using ~30,000*20 feature-place_match pairs. The predicted probabilities P(place_match|features) are used as features of the second level learners, along with 21 manually selected features. The candidates are ordered using the mean predicted probabilities of the 30 second level XGBoost learners.

All models are built using different train batches. Local validation is used to tune the model hyperparameters.

Candidate selection 1

The first candidate selection step reduces the number of potential classes from >100K to 100 by considering nearest neighbors of the observations. I considered the neighbor counts of the 2500 nearest neighbors, where y variations are 2.5 times more important than x variations. Ties in the neighbor counts are resolved by the mean time difference to the observations; resolving ties this way is motivated by the shifts in popularity of the places.

The nearest neighbor counts are calculated efficiently by splitting the data up into overlapping rectangular grids. Grids are created as small as possible while still guaranteeing that the 2500 nearest neighbors fall within the grid in the worst-case scenario. The R code is suboptimal through the use of several loops, but the major bottleneck (ordering the distances) was reduced by a custom Rcpp package, which resulted in an approximate 50% speed-up. Improving the logic further was no major priority since the features were calculated in the background.
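
For clarity, here is a brute-force R sketch of the counting idea, ignoring the grid optimisation described above; the data layout and function name are mine:

```r
# For one observation, count how many of its 2500 nearest neighbours
# belong to each place, with y differences weighted 2.5x more than x.
library(data.table)

nn_place_counts <- function(train, x0, y0, k = 2500, y_weight = 2.5) {
  # Squared distance with the y dimension up-weighted.
  d2 <- (train$x - x0)^2 + (y_weight * (train$y - y0))^2
  nn <- train[order(d2)[seq_len(min(k, nrow(train)))]]
  nn[, .N, by = place_id][order(-N)]   # candidate places, most popular first
}

# Hypothetical usage on simulated check ins:
train <- data.table(x = runif(1e5, 0, 10), y = runif(1e5, 0, 10),
                    place_id = sample(1e4, 1e5, replace = TRUE))
head(nn_place_counts(train, x0 = 5, y0 = 5), 10)
```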

Feature engineering

Feature engineering strategy

Three weeks into the eight-week competition, I climbed to the top of the public leaderboard with about 50 features. From then on I kept thinking of new features to capture the underlying patterns of the data. I also added features similar to the most important features in order to capture the subtler patterns. The final model contains 430 numeric features, and this section is intended to discuss the majority of them.

There are two types of features. The first category relates to features that are calculated using only the summary data such as the number of historical check ins. The second and largest category combines summary data of the place candidates with the observation data. One such example is the historical density of a place candidate, one year prior to the observation.

All features are rescaled if needed in order to result in similar interpretations for the train and test features.

Location

The major share of my 430 features is based on nearest-neighbor features. The neighbor counts for different Ks (1, 5, 10, 20, 50, 100, 250, 500, 1000 and 2500) and different x-y ratio constants (1, 2.5, 4, 5.5, 7, 12 and 30) resulted in 10*7 features. For example: if a test observation has 3 of its 5 nearest neighbors of class A and 2 of its 5 nearest neighbors of class B, candidate A will contain the numeric value 3 for the K=5 feature, candidate B will contain the value 2, and all other 18 candidates will contain the value 0 for that feature. The mean time difference between a candidate and all 70 combinations resulted in 70 additional features. 10 more features were added by considering the distance between the Kth nearest neighbor and the observation for a ratio constant of 2.5; these features are an indication of the spatial density. 40 more features were added in a later iteration around the most significant nearest-neighbor features: K was set at (35, 75, 100, 175, 375) for x-y ratio constants (0.4, 0.8, 1.3, 2, 3.2, 4.5, 6 and 8). The distances of all 40 combinations to the most distant neighbor were also added as features. Distance features are divided by the number of summary observations in order to have similar interpretations for the train and test features.
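
As a small illustration of the feature grid described above (the names are mine):

```r
# The 10*7 grid of nearest-neighbour count features: one feature per
# combination of K and x-y ratio constant.
ks     <- c(1, 5, 10, 20, 50, 100, 250, 500, 1000, 2500)
ratios <- c(1, 2.5, 4, 5.5, 7, 12, 30)
feature_grid <- expand.grid(K = ks, ratio = ratios)
nrow(feature_grid)   # 70 neighbour-count features
```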

I further added several features that consider the (smoothed) spatial grid densities. Other location features relate to the place summaries, such as the median absolute deviations (mad) and standard deviations in x and y. The ratio between the median absolute deviations was added as well. Features were relaxed using additive (Laplace) smoothing with different relaxation constants (20 and 300) whenever it made sense. Consequently, the relaxed mad for a place with 300 summary observations is equal to the mean of the place mad and the weighted place population mad for a relaxation constant of 300.
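
A short sketch of the relaxation as I read it, interpolating between the place statistic and the (weighted) population statistic with the relaxation constant acting as a pseudo-count:

```r
# Additive (Laplace) relaxation: shrink a place-level statistic towards
# the population statistic; C is the relaxation constant.
relax <- function(place_stat, population_stat, n_obs, C) {
  (n_obs * place_stat + C * population_stat) / (n_obs + C)
}

# With C = 300, a place with 300 summary observations gets exactly the
# mean of its own mad and the population mad, as stated above.
relax(place_stat = 0.8, population_stat = 0.4, n_obs = 300, C = 300)  # 0.6
```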

Time

The second largest share of the feature set belongs to time features. Here I converted all time period counts to period density counts in order to handle the two drops in check-in frequency. Periods include 27 two-week periods prior to the end of the summary data and 27 one-week periods prior to the end of the summary data. I also included features that look at the two-week densities, looking back between 1 and 75 weeks from the observations. These features resulted in missing values, but XGBoost is able to handle them. Additional features were added for the clear yearly pattern of some places.

Weekly counts

Hour, day and week features were calculated using the historical densities, with and without cyclical smoothing and with or without relaxation. I suspected an interaction between the hour of the day and the day of the week, so I also added cyclical hour-day features. Features were added for daily 15-minute intervals as well. The cyclical smoothing is applied with Gaussian windows. The windows were chosen such that the smoothed hour, hour-week and 15-minute blocks capture different frequencies.
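
For illustration, cyclical smoothing with a Gaussian window could look like the following sketch; the window width and sigma here are arbitrary choices of mine, not the values used in the solution:

```r
# Smooth a periodic density (e.g. 24 hourly bins) with a Gaussian
# window, wrapping around so hour 23 borders hour 0.
smooth_cyclic <- function(density, sigma = 1.5, half_window = 6) {
  n <- length(density)
  offsets <- -half_window:half_window
  w <- exp(-offsets^2 / (2 * sigma^2))
  w <- w / sum(w)                          # normalised Gaussian weights
  sapply(seq_len(n), function(i) {
    idx <- ((i - 1 + offsets) %% n) + 1    # wrap around the cycle
    sum(w * density[idx])
  })
}

hourly_density <- runif(24)                # hypothetical hourly densities
smoothed <- smooth_cyclic(hourly_density)
```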

Other time features include extrapolated weekly densities using various time series models (ARIMA, Holt-Winters and exponential smoothing). Further, the time since the end of the summary period was added, as well as the time between the end of the summary period and the last check in.

Accuracy

Understanding accuracy was the result of generating many plots. There is a significant but low correlation between accuracy and the variation in x and y, but it is not until accuracy is binned into approximately equally sized groups that the signal becomes visible. The signal is more accurate for accuracies in the 45-84 range (GPS data?).
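
The binning itself can be done with quantile cuts, as in this small sketch (the data is simulated, not the competition data):

```r
# Cut accuracy into 32 approximately equally sized groups via quantiles.
accuracy <- rlnorm(1e5, meanlog = 4)   # hypothetical accuracy values
breaks <- quantile(accuracy, probs = seq(0, 1, length.out = 33))
acc_group <- cut(accuracy, breaks = unique(breaks), include.lowest = TRUE)
table(acc_group)[1:5]                  # roughly equal bin sizes
```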

Mean variation from the median in x versus 6 time and 32 accuracy groups

The accuracy distribution seems to be a mixed distribution with three peaks, which changes over time. It is likely related to three different mobile connection types (GPS, Wi-Fi or cellular). The places show different accuracy patterns, and features were added to indicate the relative accuracy group densities. The middle accuracy group was set to the 45-84 range. I added relative place densities for 3 and for 32 approximately equally sized accuracy bins. It was also discovered that the location is related to the three accuracy groups for many places. This pattern was captured by adding features for the different accuracy groups. A natural extension of the nearest neighbor calculation would incorporate the accuracy group, but I no longer had time to implement it.

The x-coordinates seem to be related to the accuracy group for places like 8170103882

Z-scores

Tens of z-scores were added to indicate how similar a new observation is to the historical patterns of the place candidates. Robust z-scores ((f-median(f))/mad(f) instead of (f-mean(f))/sd(f)) gave the best results.
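
As a one-liner, using R's mad() (which applies the 1.4826 consistency constant by default):

```r
# Robust z-score as described above: distance from the historical median
# in units of the median absolute deviation.
robust_z <- function(f_history, f_new) {
  (f_new - median(f_history)) / mad(f_history)
}

robust_z(f_history = rnorm(1000), f_new = 2)   # roughly 2 for normal data
```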

Most important features

Nearest neighbors are the most important features for the studied models. The most significant nearest-neighbor features appear around K=100 for distance constant ratios around 2.5. Hourly and daily densities were all found to be very important as well, and the highest feature ranks are obtained after smoothing. Relative densities of the three accuracy groups also appear near the top of the most important features. An interesting feature that also appears at the top of the list relates to the daily density 52 weeks prior to the check in. There is a clear yearly pattern, which is most obvious for places with the highest daily counts.

Clear yearly pattern for place 5872322184. The green line goes back 52 weeks since the highest daily count

The feature files are about 800MB for each batch and I saved all the features to an external HD.

Candidate selection 2

The features from the previous section are used to generate binary classification models on 15 different train batches using XGBoost. With 100 candidates for each observation this is a slow process, so it made sense to narrow the number of candidates down to 20 at this stage. I did not perform any downsampling in my final approach, since all zeros (no match between the candidate and the true place) contain valuable information; XGBoost handles unbalanced data quite well in my experience. I did consider omitting observations whose true class was not in the top 100 candidates, but this resulted in slightly worse validation scores. The reasoning is the same as above: those values contain valuable information! The 15 candidate selection models are built with the top 142 features. The feature importance order was obtained from the XGBoost feature importance ranks of 20 models trained on different batches. Hyperparameters were selected using the local validation batches. The 15 second candidate selection models each generate a predicted probability P(place_match|data); I average those to select the top 20 candidates for each observation in the second candidate selection step.

At this point I also dropped observations that belong to places that only have observations in the train/validation period. This filtering was also applied to the test set.

First level learners

The first level learners are very similar to the second candidate selection models, except that 75 of the 100 models were fit on one fifth of the data; the other 25 models were fit on 100 candidates for each observation. The 100 base XGBoost learners were fit on different random parts of the training period. Deep trees gave me the best results here (depth 11), and the eta constant was set to (11 or 12)/500 for 500 rounds. Column sampling also helped (0.6), and subsampling the observations (0.5) did not hurt while of course speeding up fitting. I included either all 430 features or a uniform random pick of the features, ordered by importance, within a desirable feature count range (100-285 and 180-240). The first level learner framework was created to handle first level learner types other than XGBoost as well. I experimented with the nnet and H2O neural network implementations, but those were either too slow in transferring the data (H2O) or too biased (nnet). The way XGBoost handles missing values is another great advantage over the mentioned neural network implementations. The XGBoost models are also quite variable, since they are trained on different random train batches with differing hyperparameters (eta constant, number of features, and number of considered candidates (either 20 or 100)).
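
As an illustration, a single first level learner with the hyperparameters quoted above might be trained like this with R's xgboost package; the feature matrix and labels here are hypothetical stand-ins for the prepared candidate data:

```r
library(xgboost)

# Hypothetical prepared inputs: a numeric feature matrix and 0/1 labels
# indicating whether the candidate matches the true place.
features    <- matrix(rnorm(2000 * 10), nrow = 2000)
place_match <- rbinom(2000, 1, 0.05)
dtrain      <- xgb.DMatrix(data = features, label = place_match)

params <- list(
  objective        = "binary:logistic",
  eta              = 11 / 500,   # "(11 or 12)/500" in the text
  max_depth        = 11,         # deep trees gave the best results
  colsample_bytree = 0.6,        # column sampling helped
  subsample        = 0.5         # row subsampling sped up fitting
)

model <- xgb.train(params = params, data = dtrain, nrounds = 500)
```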

Second level learners

The 30 second level learners combine the predictions of the 100 first level models with 21 manually selected features for all top 20 candidates. The 21 additional features are high-level features such as the x, y and accuracy values, as well as the time since the end of the summary period. The added value of the 21 features is very low but consistent on the validation set and the public leaderboard (~0.02%). The best local validation score was obtained with moderate tree depths (depth 7), and the eta constant was set to 8/200 for 200 rounds. Column sampling also helped (0.6), and subsampling the observations (0.5) did not hurt but again sped up fitting. The candidates are ordered using the mean predicted probabilities of the 30 second level XGBoost learners.
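
A minimal sketch of that final ordering step, assuming a data.table layout with one row per (observation, candidate) pair and one probability column per second level model (all names are hypothetical):

```r
library(data.table)

# Average the predicted probabilities of the second level models per
# candidate and keep the three most likely places per observation.
rank_top3 <- function(preds, model_cols) {
  preds[, mean_prob := rowMeans(.SD), .SDcols = model_cols]
  preds[order(-mean_prob), .(top3 = head(place_id, 3)), by = observation_id]
}
```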

Analysis of the local MAP@3 indicated better results for accuracies in the 45-84 range. The difference between the local and test validation scores is in large part related to this observation. There seems to be a trend towards the use of devices that show less variation.

Local MAP@3 versus accuracy groups

Conclusion

The private leaderboard standing below, used to rank the teams, shows the top 30 teams. It was a very close competition in the end, and Markus would have been a well-deserved winner as well. We were very close to each other from the third week of the eight-week contest onward and pushed each other forward. The fact that the test data contains 8.6 million records, and that it was split randomly for the private and public leaderboard, resulted in a very confident estimate of the private standing given the public leaderboard. I was most impressed by the approaches of Markus and of Jack (Japan), who finished in third position. You can read more about their approaches on the forum. Many others also contributed valuable insights.

Private leaderboard score (MAP@3) – two teams stand out from the pack

I started the competition using a modest 8GB laptop but decided to purchase a €1500 workstation two weeks into the competition to speed up the modeling. Starting with limited resources ended up being an advantage, since it forced me to think of ways to optimize the feature generation logic. My big friend in this competition was the data.table package.

Running all steps on my 48GB workstation would take about a month. That seems like a ridiculously long time, but it is explained by the extended computation time of the nearest neighbor features. While the NN features were being calculated I was continuously working on other parts of the workflow, so speeding up the NN logic would not have resulted in a better final score.

A score of ~0.62 could, however, be achieved in about two weeks by focusing on the most relevant NN features. I would suggest considering 3 of the 7 distance constants (1, 2.5 and 4) and omitting the mid-K NN features. Cutting the first level models from 100 to 10 and the second level models from 30 to 5 would also not result in a strong performance decrease (estimated decrease of 0.1%) and would cut the computation time to less than a week. You could of course run the logic on multiple instances and speed things up further.

I really enjoyed working on this competition even though it came in one of the busiest periods of my life: it was launched while I was in the middle of writing my Master’s thesis in statistics, in combination with a full-time job. The data shows many interesting noisy and time-dependent patterns, which motivated me to play with the data before and after work. It was definitely worth every second of my time! I was inspired by the work of other Kaggle winners and successfully implemented my first two-level model. Winning the competition is a nice extra, but it’s even better to have learnt a lot from the other competitors. Thank you all!

I look forward to your comments and suggestions, please go to my original post to post your comments.

Tom.

How to innovate in the Age of Big Data presented by Stephen Brobst


Executive Summer Session

Stephen Brobst will be at the European Data Innovation Hub. We asked him to share his views on the importance of open data, open source, analytics in the cloud and data science. Stephen is at the forefront of the technology, and we can’t wait to hear what is happening in Silicon Valley. You can count on leaving the workshop inspired and armed with actionable ideas that will help us define a profitable strategy for our data science teams.

Format of the session:

  • 15:00 – Keynote: How to Innovate in the Age of Big Data
  • 15:50 – Open Discussion on “Sustainable Strategies for Data Science”, tackling the following topics:
  • Data Science is the Key to Business Success
  • Three Critical Technologies Necessary for Big Data Exploitation
  • How to Innovate in the Age of Big Data
  • 16:45 – Networking Session

Stephen Brobst is the Chief Technology Officer for Teradata Corporation.  Stephen performed his graduate work in Computer Science at the Massachusetts Institute of Technology where his Masters and PhD research focused on high-performance parallel processing. He also completed an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management.  Stephen is a TDWI Fellow and has been on the faculty of The Data Warehousing Institute since 1996.  During Barack Obama’s first term he was also appointed to the Presidential Council of Advisors on Science and Technology (PCAST) in the working group on Networking and Information Technology Research and Development (NITRD).  He was recently ranked by ExecRank as the #4 CTO in the United States (behind the CTOs from Amazon.com, Tesla Motors, and Intel) out of a pool of 10,000+ CTOs.

Job – Click@Bike – Senior Data Engineer

Click@Bike is a promising start-up with European ambitions for the development and distribution of an innovative touristic cycling product-service offer for the hospitality sector. The Company is well capitalised and has extensive international support from the public sector.

To reinforce its in-house IT development team, the Company is looking to hire a Senior Data Engineer.

About the role: Data Engineer

You will be a Senior Data Engineer responsible for all operational aspects of the Company’s data: from sourcing through public and commercial tourist channels, through designing a robust schema and data model, to implementing and maintaining the data infrastructure using the latest technologies and best practices, with the aim of providing the most current data in a secure, efficient, reliable and scalable manner to support back-end services and front-end user information display services.

The Senior Data Engineer will work together with external data providers. She/he will perform the technical analysis of the specific data interfaces, execute the data translation and integration with the in-house back-end systems by implementing or developing the respective Extract, Transform, Load (ETL) solutions, and, together with the Product Manager, roll out prototypes and final products to customers.

Further to the primary tasks of the Senior Data Engineer, it is the Company’s strategy to broaden the scope towards software engineering tasks in the backend application stack over time to cross-functionalize the IT department.

The Senior Data Engineer reports to the Product Manager; as the first in-house data engineer, she/he has the unique chance to develop the Company’s data engineering processes from the ground up and to manage the Company’s data.

The Senior Data Engineer will establish and lead the data engineering team.

Essential Qualifications

  • Master in IT with 7 years of job-related experience
  • Experience as a software engineer, data engineer, data architect or any combination of the roles
  • Software programming skill set and sound knowledge of design patterns
  • Experience in object-oriented programming, preferably using Java
  • Deep understanding of database schema design and data structures
  • Experience in data modelling using, inter alia, entity-relationship diagrams and UML
  • Experience with RDBMS: MySQL, PostgreSQL, MS SQL or Oracle
  • Experience with ETL tools, preferably open source, such as Talend, Pentaho
  • Experience with RDBMS spatial extensions
  • Experience with structured data communication: SOAP, REST/XML, JSON
  • Excellent technical communication skills
  • Software modelling, architecture & software services design experience
    • Unified Modelling Language (UML)
    • User stories, use cases
    • System design
    • Functional and technical systems specifications
  • Experience with back-end and front-end application development
  • Experience in IT project management
  • Knowledge of NoSQL database concepts and their typical use cases
  • Experience in team leadership

Desirable Qualifications

  • Experience in working in an international environment
  • Experience with NoSQL databases such as graph DB (e.g. Neo4j), document DB (e.g. MongoDB) or key-value stores (e.g. Voldemort or CouchDB)
  • Experience with Java Enterprise: J2EE, JSP/JSF, EJB, JDBC, JMS framework
  • Experience with Android development
  • Experience in web development and system setup: Apache/Tomcat, PHP

Office & Development Tools

  • MS Office
    • MS Word, MS PowerPoint: advanced user
    • MS Access, MS Excel: power user
  • Eclipse/Netbeans, Bitbucket/Git/GitHub, Confluence & JIRA
  • XML Editor, such as XML Spy or Oxygen XML Editor
  • Enterprise Architect
  • PowerDesigner
  • Operating systems: Windows 8.1 or 10, Linux (Ubuntu, Debian, RedHat)

Soft Skills

  • Spirit to work in a start-up environment
  • Good communication skills
  • Good teamwork competencies
  • Sound analytical skills
  • Sound judgement
  • Service and customer oriented

Language Skills

  • We are operating in an international environment; hence a high English proficiency is mandatory.

Additional: fluency in Dutch, French or German; a third language is a plus

Apply:

Make sure that you are a member of the Brussels Data Science Community linkedin group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

For further information or to apply for this vacancy, please contact Bart Vandermeeren and include your CV.

Job – Junior Data Scientist


Are you pursuing a career in data science?

We have a great opportunity for you: an intensive training program combined with interesting job opportunities!

Interested? Check out http://di-academy.com/bootcamp/, follow the link to our data science survey, and send your CV to training@di-academy.com.

Once selected, you’ll be invited to the intake event that will take place in Brussels this summer.

Hope to see you there,

Nele & Philippe

Job – Python Predictions – Data Scientists


Hi Philippe,

We’re looking for some great new people again.
It would be great if you could give us some visibility for our search.
Candidates can simply send a CV and (e)mail of motivation to jobs@pythonpredictions.com.
More details in the links or text below.
Thanks!!!
Geert
Data Scientist
Python Predictions – Bruxelles Woluwe-Saint-Pierre
Python Predictions is a Brussels-based consulting firm, founded in 2006 and specialized in data science and predictive analytics. We are currently looking for data scientists.

Responsibilities

  • In-company data science projects for our clients
  • Contribute to explorative, descriptive and predictive analysis

Required skills or education

  • Proven interest and skills in data science and analytics
  • Proven interest and skills in at least one analytical programming language
  • Work flexibly in rapidly changing environments
  • Good visualisation and communication skills
  • Understand business problems

Personality

  • Analytical mindset
  • Open minded
  • Integrity
  • Critical of the output produced

Language skills

  • Working knowledge of Dutch, French and English

How to apply?
Send us your curriculum vitae and a brief letter of motivation. We need both documents in order to consider your application.

More details
http://pythonpredictions.com/jobs/come-mine-with-us/

About us
Why should you apply for a position at Python Predictions? We believe we understand like no other what makes analysts tick. We believe that successful analysts must possess and develop a number of very distinct skills, ranging from social to technical, from intuitive to analytical. Putting these skills to work on real-life analytical projects is rewarding. And we provide a stimulating environment with a focus on innovation and cooperation. Find out more about our activities on www.pythonpredictions.com

Job Type: Full-time

Required languages:

  • Dutch
  • English
  • French