Job – InfoFarm – Big Data Developer

InfoFarm

InfoFarm is expanding and is looking for a new Big Data Developer!

COMPANY PROFILE

InfoFarm is a Data Science company that focuses on delivering high-quality Data Science and Big Data solutions to its customers. We owe our name to one of the many informal brainstorming sessions among colleagues that arise spontaneously during the lunch break. One pleasant session later, we had the whole farm-life analogy worked out: we plant ideas, we plough through our customers' data, let it grow with other data or insights, and harvest business value by applying various (machine learning) techniques to it.

We have a unique team with a variety of talents and different backgrounds: Data Scientists (people with a research background in a quantitative field), Big Data Developers (strongly technical Java programmers) and Infrastructure people (the bits-and-bytes people). Together we develop great solutions for our customers across a range of sectors. To strengthen our team, we are looking for a Big Data Developer.

JOB DESCRIPTION

As a Big Data Developer you mainly develop Big Data applications on the Apache Hadoop or Apache Spark platform. You work independently or in a mixed team, either at our offices or on assignment at the customer's site. You are not afraid to come forward with creative solutions to complex problems. One day you work for a telecom company, the next you get to know Belgium's water purification system, and after that you build a Big Data application in the logistics sector. At InfoFarm no two projects are alike, but that does not put you off. You look forward to learning about different businesses, to following new developments and technologies on the market, and to sharing that knowledge with our customers and within the team.

JOB REQUIREMENTS

You have at least 2-3 years of experience in Java development. Certifications are an asset.

You can work with Maven, Spring or EJB, and with one or more RDBMSs.

Knowledge of Hadoop, Hive and Pig is a plus, as is knowledge of Spark and Spark MLlib. Willingness to get certified in one of these domains is a must.

Knowledge of R and Scala is an advantage.

You hold at least a Bachelor's degree in Applied Computer Sciences.

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with them as well, just send your CV to datasciencebe@gmail.com.

See the full job details above and send your CV in reply to jobs@infofarm.be!

(An English version can be requested via jobs@infofarm.be)

Check out the original post: http://www.infofarm.be/articles/were-hiring-big-data-developer-0


Training – Spark4Devs – by Andy Petrella + Xavier Tordoir from Data Fellas



Woot Woot – The Hub is hosting the Spark4Devs training from Data Fellas

Why you should come to Spark4Devs

Why now

Nowadays, for developers, one of the few areas where there is still a great deal to discover and to build is Big Data, or more precisely, Distributed Engineering.

Many developers have been working in this area for a while, but it is steadily shifting towards Distributed Computing on Distributed Datasets.

Machine Learning matters in this area, but core development capabilities are just as important: production processes must be made Scalable and Highly Available while preserving the Reliability and Accuracy of the results.

Scala clearly has a role to play here. Typesafe has been spreading the word and was a precursor in this respect, and Data Fellas and BoldRadius Solutions are on the same page: with the introduction of the Distributed facet, Scala is set to become the natural successor of Python, R or even Julia in Data Science.

That is why you, developers and of course data scientists alike, will want to come to this training on Apache Spark, with a focus on the Scala language.

Why Spark 4 Devs

Because the Apache Spark project is THE project you need for all your data projects, even if the data is not distributed!

The training lasts three days. The first day tackles the core concepts and batch processing: we start by explaining why Spark is the way to go, then show how to hit that road by hacking on some datasets with it.
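To give a flavour of what that first day covers, here is a minimal batch-processing sketch in Scala using the RDD API; the input path and the word-count task are invented purely for illustration, not taken from the course material:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountBatch {
  def main(args: Array[String]): Unit = {
    // Local Spark context for experimenting; on a cluster the master is set by spark-submit.
    val conf = new SparkConf().setAppName("spark4devs-batch").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Hypothetical input file: one record per line.
    val lines = sc.textFile("data/sample.txt")

    // Classic batch job: split into words, count occurrences, keep the top 10.
    val topWords = lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))
      .reduceByKey(_ + _)
      .sortBy({ case (_, count) => count }, ascending = false)
      .take(10)

    topWords.foreach { case (word, count) => println(s"$word: $count") }

    sc.stop()
  }
}
```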

The second day introduces the Spark Streaming library in the same pragmatic and concrete manner. Of course we'll see how to crunch some tweets, but we'll also show how to consume a real broker, namely Apache Kafka.
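As a taste of the second day, here is a minimal Spark Streaming sketch that consumes a Kafka topic with the direct stream API; the broker address and topic name are placeholders, and the spark-streaming-kafka artifact is assumed to be on the classpath:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import kafka.serializer.StringDecoder

object KafkaWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("spark4devs-streaming").setMaster("local[2]")
    // Micro-batches of 10 seconds.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Placeholder broker and topic names.
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics = Set("tweets")

    // Direct (receiver-less) stream of (key, value) pairs from Kafka.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Count words per micro-batch and print a sample to the console.
    stream
      .flatMap { case (_, message) => message.split("\\s+") }
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```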

Not only will you be on track to use Apache Spark in your projects, but since the Spark Notebook will be used as the main driver for our analyses, you will also be able to work with Spark interactively and proficiently, with a shortened development lifecycle.

Title

Spark 4 Devs

Subtitle

Discover and learn how to manage Distributed Computing on Distributed Datasets in a Big Data environment driven by Apache Spark.

Date

23, 24, 25 September

Target audience

Mainly developers that want to learn about distributed computing and data processing

Duration

3 days

Location

European Data Innovation Hub @ AXA, Vorstlaan 23, 1170 Brussel

Price

€500 per day per trainee

Audience size

Minimum 8 – maximum 15 people

Registration

via http://spark4devs.data-fellas.guru/

Motivation

http://blog.bythebay.io/post/125621089861/why-you-should-come-to-spark4-devs

Overview

via http://spark4devs.data-fellas.guru/

Learning Objectives

via http://spark4devs.data-fellas.guru/

Topics

Distributed Computing, Spark, Spark Streaming, Kafka, Hands on, Scala

Prerequisites

Development skills in common programming languages

About the trainers:

Andy

Andy is a mathematician turned distributed computing entrepreneur.
Besides being a Scala/Spark trainer, Andy has participated in many projects built using Spark, Cassandra and other distributed technologies, in various fields including geospatial, IoT, automotive and smart-city projects.
He is the creator of the Spark Notebook (https://github.com/andypetrella/spark-notebook), the only reactive and fully Scala-based notebook for Apache Spark.
In 2015, Xavier Tordoir and Andy founded Data Fellas (http://data-fellas.guru) working on:
* the Distributed Data Science toolkit, Shar3
* the Distributed Genomics product, Med@Scale

After completing a Ph.D. in experimental atomic physics, Xavier focused on the data processing part of the job, with projects in finance, genomics and software development for academic research. During that time, he worked on time series, on prediction of biological molecular structures and interactions, and applied Machine Learning methodologies. He developed solutions to manage and process data distributed across data centers. Since leaving academia a couple of years ago, he has been providing services and developing products related to data exploitation in distributed computing environments, embracing functional programming, Scala and Big Data technologies.

#datascience training programmes

Why don't you join one of our #datascience trainings to sharpen your skills?

Special rates apply if you are a job seeker.

Here are some training highlights for the coming months:

Check out the full agenda here.

Have you been to our Meetups yet?

Each month we organize a Meetup in Brussels focused on a specific Data Science topic.

Brussels Data Science Meetup – Brussels, BE – 1,328 Business & Data Science pros

Next Meetup: Event – SAPForum – Sept 9 @ Tour & Taxis, Brussels (Wednesday, Sep 9, 2015)

Data Science Trainings Belgium



The European Data Innovation Hub facilitates a full series of Data Science and Big Data training programmes organized by its partners.

You can expect

  • a series of executive trainings to support your management in understanding the benefits of analytics
  • a series of coached MOOCs on machine learning and Big Data technology
  • a series of hands-on trainings on the different data science technologies

All members of the European Data Science and Big Data communities are welcome to use our Brussels-based professional facilities to give their trainings. The members of the Hub will promote your training and include it on our e-learning platform for further use.

The full list is available here.

Here are some highlights for the coming months:

Check out the full agenda here.

How to get the best price:

You can always use Eventbrite to order and pay for your ad-hoc trainings, but if you want to benefit from volume discounts, you can contact Philippe on 0477/23.78.42 | pvanimpe@dihub.eu.
