Job – Sentiance – Marketing Data Scientist – Antwerp


Hi Philippe,

Given the topic of the meetup next Thursday, I think the following job opportunity might be relevant to post on your blog 🙂
At Sentiance we’re looking for a data scientist with experience in market segmentation:
http://www.sentiance.com/team/marketing-data-scientist/
However, we always welcome applications from junior candidates too!
http://www.sentiance.com/team/junior-data-scientist/

Thanks, and hope to see you Thursday!
Vincent Spruyt
twitter id: @sentiance

As an experienced data analyst, you are ready to kick off a new adventure in a fast-paced environment where you can work with the latest machine learning technologies and data science tools.

Job description

  1. You will be part of our Data Science Team and you are passionate about machine learning and data analysis.
  2. Using advanced data analytics, you will form hypotheses and draw meaningful insights about user behavior and user segmentation. As a marketing data scientist, you will explore relations between users and their preferences, discover interesting segments, and apply advanced clustering and dimensionality reduction techniques (a brief illustrative sketch follows this list).
  3. You will carry out research that will improve our general understanding of our users, and communicate your findings to other team members in order to initiate new platform development cycles.
  4. You will apply your statistical and mathematical background to real-life big-data problems, and use your machine learning knowledge on a day to day basis.
  5. You will work closely with our Data Engineering Team, as your work is used to improve our models and is pushed through our release process.
  6. Your main objectives will be the design and implementation of data mining and analysis algorithms and the communication of reports and quality metrics for current production processes.
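Not part of the job ad, but as a rough flavour of the segmentation work described in point 2, here is a minimal, purely illustrative scikit-learn sketch; the synthetic feature matrix and the choice of four segments are assumptions, not Sentiance specifics:

    # Illustrative only: cluster users on behavioural features, then project to 2D.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 12))   # 500 synthetic users, 12 made-up behavioural features

    scaled = StandardScaler().fit_transform(features)
    segments = KMeans(n_clusters=4, random_state=0).fit_predict(scaled)
    embedding = PCA(n_components=2).fit_transform(scaled)  # 2D view for plotting/inspection

    print("users per segment:", np.bincount(segments))
    print("2D embedding shape:", embedding.shape)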

Desired Skills & Experience:

  1. You have a master's degree or PhD in computer science or a related field.
  2. You are an expert in advanced analytics and are experienced in hypothesis testing.
  3. You possess a deep understanding of clustering, manifold learning and predictive modeling techniques.
  4. You have good knowledge of and experience with any of Python, Matlab or R.
  5. You have a strong mathematical background and analytical mindset.
  6. You are fluent in English. Dutch is a plus.
  7. You can work independently and take matters into your own hands.
  8. The ability to quickly learn new technologies and successfully implement them is essential.

Bonus

Experience with any of the following is considered a plus:

  • Advanced Python knowledge and experience
  • Scikit-learn, Pandas, Numpy, Matplotlib
  • Experience with Spark or the Hadoop ecosystem
  • Machine learning, data mining, data visualization

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

Send your job application today! 

Please send Sentiance your resume and a strong motivation with reference sentiance/2015/MDS or apply on LinkedIn.

Job – UZA – Accumulate – Junior research developer – Biomedical Natural Language Processing


Junior research developer (temporary position)

The ACCUMULATE research project in which this vacancy is situated is an IWT-SBO project on developing language technology for extracting crucial medical information from clinical text (Biomedical Natural Language Processing), visualising this information, and demonstrating the technology in software prototypes for various health information analytics use cases and target audiences. The project is a collaboration with KU Leuven, UZ Leuven and Universiteit Antwerpen.

UZA's role in ACCUMULATE is to develop and evaluate innovative software prototypes in consultation with physicians, researchers and pharmaceutical companies. The ICT department of UZA has a full-time vacancy for a research developer to build these prototypes. The candidate starts on 1 January 2016 or as soon as possible after that date.

Job description

  • you are responsible for the development of 1) a technical backbone in which the language technology modules developed within the project can be integrated in a flexible way, and 2) health information analytics software prototypes that use those modules; in addition, you are closely involved in the development of the language technology and help evaluate the software prototypes (a brief illustrative sketch of such a modular pipeline follows this list);
  • you investigate the advantages and disadvantages of different technologies and are able to deploy them flexibly, taking into account the use case and its target audience;
  • you actively look for new technologies and innovations that can be applied in the project;
  • at UZA you work closely with project leaders and system administrators of the ICT department and with the senior researcher on the project;
  • you actively take part in the project meetings and consult with the other project partners and the project's Industrial Advisory Board about the development of the prototypes;
  • you get the opportunity to work on your own PhD research, supervised by promoters Prof. Dr. Walter Daelemans (CLiPS research center, UAntwerpen) and Dr. Kim Luyckx (ICT, UZA). As a PhD student you publish about your research and present it to a scientific audience.
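The vacancy does not prescribe any particular design; purely as an illustration of what a flexible backbone for chaining language-technology modules might look like, here is a minimal Python sketch in which every module name and the toy lexicon are hypothetical:

    # Illustrative sketch of a pluggable NLP pipeline backbone (all modules are toy examples).
    from typing import Callable, Dict, List

    Module = Callable[[Dict], Dict]  # a module enriches a document dict with annotations

    def tokenize(doc: Dict) -> Dict:
        doc["tokens"] = doc["text"].split()
        return doc

    def tag_medication_mentions(doc: Dict) -> Dict:
        lexicon = {"aspirin", "ibuprofen", "metformin"}  # hypothetical mini-lexicon
        doc["medications"] = [t for t in doc["tokens"] if t.lower() in lexicon]
        return doc

    class Pipeline:
        """Chains modules so they can be swapped per use case and target audience."""
        def __init__(self, modules: List[Module]):
            self.modules = modules

        def run(self, text: str) -> Dict:
            doc = {"text": text}
            for module in self.modules:
                doc = module(doc)
            return doc

    print(Pipeline([tokenize, tag_medication_mentions]).run("Patient received aspirin after admission."))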

Requirements

  • you hold a master's degree in computer science, computational linguistics, engineering or another relevant discipline;
  • you have an analytical mind and can work independently;
  • you are interested in scientific research;
  • you are familiar with Linux environments and open-source software;
  • you have experience with several programming languages (e.g. Java, Python) and with data modelling;
  • you have experience with databases (e.g. MySQL, PostgreSQL, MongoDB) and application servers such as Tomcat;
  • we consider the following an added value: (1) experience in scientific research, (2) work experience, (3) experience with tools for real-time and scalable computing;
  • you have experience building prototypes or web services (e.g. a hobby or professional project on GitHub).

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

How to apply

If you are interested, apply online before 30 September 2015 via www.uzatrektaan.be, tel. 03 821 38 99.

Job – ULB – Postdoc in Machine Learning – 2 years


ULB MLG Brussels: Postdoc in machine learning, data science and big data for security (e.g. fraud detection)

Two-year postdoc position

Description

Research in big data and scalable machine learning with application to security problems (e.g. fraud detection), in the context of a project funded by the Brussels Region.

http://mlg.ulb.ac.be/BruFence

Required skills:

  • You have a PhD in Machine Learning, Computational Science, (Bio)Engineering, Data Science, or equivalent.
  • You have expertise in statistical machine learning, data mining, big data, MapReduce, Spark, Python and R programming.
  • A plus: expertise in applying big data mining to real problems and security applications, notably credit card fraud detection.
  • You are fluent in English.
  • The successful applicant will be hosted by the Machine Learning Group, co-headed by Prof. Gianluca Bontempi.

    Starting date: as soon as possible
    For more information please contact Prof. Gianluca Bontempi, mail: gbonte@ulb.ac.be.
    Please send your CV, a motivation letter, contact information for three references, and a publication list indicating the citation count of each published paper.

Number of positions available: 1

Research Fields

Computer science – Modelling tools

Career Stage

Experienced researcher or 4-10 yrs (Post-Doc)

Research Profiles

First Stage Researcher (R1)

Comment/web site for additional job details

mlg.ulb.ac.be

Job – InfoFarm – Data Scientist


InfoFarm is expanding and is looking for a new Data Scientist!

COMPANY PROFILE

InfoFarm is a Data Science company that specialises in delivering high-quality Data Science and Big Data solutions to its clients. We owe our name to one of the many informal brainstorming sessions among colleagues that spontaneously arise during the lunch break. One pleasant session later, the whole farm-life analogy was in place: we plant ideas, we plough through our clients' data, let it grow with other data or insights, and harvest business value by applying various (machine learning) techniques to it.

We have a unique team with diverse talents and backgrounds: Data Scientists (people with a research background in a quantitative field), Big Data Developers (strongly technical Java programmers) and infrastructure people (the bits-and-bytes people). Together we develop great solutions for our clients across various sectors. To strengthen our team, we are looking for a Data Scientist.

JOB DESCRIPTION

As a Data Scientist you explore datasets, deliver insights and help clients take action based on those insights. You work independently or in a mixed team, either at our offices or on secondment at the client. You are not afraid to come forward with creative solutions to complex problems. You guide our Big Data Developers in building Big Data applications based on the insights you have obtained. You will end up in different sectors and environments: one day you work for a telecom company, the next day you get to know Belgium's water purification system, and after that you build a Big Data application in the logistics sector. At InfoFarm no two projects are alike, but that does not scare you off. You look forward to learning about different businesses, to following new developments and technologies on the market, and to sharing that knowledge with our clients and within the team.

REQUIREMENTS

  • You have a master's degree in a quantitative field (mathematics, engineering, …). A PhD is a plus.
  • Knowledge of a data analysis language (R, Python, …) gives you a head start. Willingness to learn one of these languages is a requirement.
  • Knowledge of SQL is an advantage.
  • Getting to grips with Big Data tools (Hadoop, Hive, Pig, Spark, Spark MLlib, …) does not scare you off.
  • Knowledge of Java and Scala is an added value.

Apply:

Make sure that you are a member of the Brussels Data Science Community LinkedIn group before you apply. Join here.

Please note that we also manage other vacancies that are not public. If you want us to bring you into contact with those too, just send your CV to datasciencebe@gmail.com.

Have a look at the full job description and send your CV to jobs@infofarm.be!

(An English version can be requested via jobs@infofarm.be)

Check out the original post: http://www.infofarm.be/articles/were-hiring-data-scientist

Free Training – Python for Data Science – Brussels

In the past few years, Python has emerged as a solid platform for data science. Couple a mature, clean and expressive language with powerful, fully-featured libraries for data wrangling and machine learning, and you’re set up for maximum productivity. Easily ingest your data from practically anywhere using one of Python’s thousands of free libraries. Effortlessly turn hundreds of convoluted lines of obscure model code into just a few lines of near-English prose. Add a few annotations and get maximum performance without drowning in pools of unnecessary boilerplate code. Present your results in beautiful living notebooks that seamlessly mix text, code and graphs. Whether you do all your modeling in R, you’ve written nothing but Matlab since university, or you swear by C# or (gasp!) Java, discovering Python will be a wonderful experience.
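As a purely illustrative taste of that "few lines of near-English prose" (the CSV file and its columns are made up, not part of the course material):

    # Illustrative only: load data, fit a model, evaluate it in a handful of lines.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("customers.csv")              # hypothetical dataset
    X = data.drop(columns=["churned"])
    y = data["churned"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))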

In detail, we plan to cover the following points:

  • Quick history of Python and typical use cases
  • Key advantages and disadvantages of Python for data science
  • Ways to run Python and write code
  • Quick tour of the language
  • Showcase of useful Python packages for data science: NumPy, Matplotlib, SciPy, Pandas, Scikit-learn, PySpark, PyHive. Accessing RDBMSs
  • Writing efficient Python: Cython, Numba, SWIG (see the small Numba sketch after this list)
  • Pointers for further learning
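As a small, hedged taste of the "efficient Python" topic above, Numba can just-in-time compile a plain Python loop; this toy example is illustrative only, and actual speed-ups depend on the workload:

    # Illustrative Numba example: JIT-compile a numerical loop.
    import numpy as np
    from numba import njit

    @njit
    def sum_of_squares(values):
        total = 0.0
        for v in values:
            total += v * v
        return total

    data = np.random.rand(1_000_000)
    print(sum_of_squares(data))  # first call compiles; subsequent calls run at native speed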

Registration: http://www.meetup.com/Open-Data-Innovation-Training-Hub/events/225090704/

The course will be taught by Patrick Varilly of Data Minded. Patrick fell in love with Python four years ago as a theoretical chemistry post-doc at Cambridge and has never looked back. He has contributed to SciPy and used Python in wide-ranging settings, from scientific libraries to model proof-of-concepts to data backend pipelines.

Pizza will be provided courtesy of Data Minded.

Patrick is a co-worker at the European Data Innovation Hub. He coached the Apache Spark MOOCs this summer and will be teaching a full series of Python courses this fall.

More #datascience trainings in Brussels:

Why don’t you join one of our #datascience trainings to sharpen your skills?

Special rates apply if you are a job seeker.

Here are some training highlights for the coming months:

Check out the full agenda here.

Have you been to our Meetups yet?

Each month we organize a Meetup in Brussels focused on a specific DataScience topic.

Brussels Data Science Meetup – Brussels, BE – 1,336 Business & Data Science pros

Mission: to educate, inspire and empower scholars and professionals to apply data science to address humanity’s grand challenges.

Next Meetup: SAP Forum – Sept 9 @ Tour & Taxis, Brussels (Wednesday, Sep 9, 2015) – 5 attending

Check out this Meetup Group →

Coached MOOC – Introduction to Big Data with Apache Spark


What

  • Learn how to apply data science techniques using parallel programming in Apache Spark to explore big (and small) data.
  • Study online but work in a group
  • Get help from a local expert

Why we coach MOOCs

The European Data Innovation Hub is partnering with top experts to offer MOOC participants the possibility to do these online courses in a group. For the duration of the MOOC, participants are welcome to come to the Hub in Brussels to work and to go through exercises with other participants. On specific days, one or more domain experts will be present to coach the students.

Planning

  1. Sign up to this course here
  2. Join the meetup group here

About this course

Organizations use their data for decision support and to build data-intensive products and services, such as recommendation, prediction, and diagnostic systems. The collection of skills required by organizations to support these functions has been grouped under the term Data Science. This course will attempt to articulate the expected output of Data Scientists and then teach students how to use PySpark (part of Apache Spark) to deliver against these expectations. The course assignments include Log Mining, Textual Entity Recognition and Collaborative Filtering exercises that teach students how to manipulate data sets using parallel processing with PySpark.

This course covers advanced undergraduate-level material. It requires a programming background and experience with Python (or the ability to learn it quickly). All exercises will use PySpark (part of Apache Spark), but previous experience with Spark or distributed computing is NOT required. Students should take this Python mini-quiz before the course and take this Python mini-course if they need to learn Python or refresh their Python knowledge.
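Purely as a taste of the exercise style (not the actual course assignments), a minimal PySpark job that counts log lines per HTTP status code might look like this; the file name and log format are assumptions:

    # Illustrative PySpark sketch: count log lines per HTTP status code.
    from pyspark import SparkContext

    sc = SparkContext(appName="log-mining-sketch")

    lines = sc.textFile("access.log")                     # hypothetical log file, one line per request
    status_counts = (lines
                     .map(lambda line: line.split()[-2])  # assumes the status code is the 2nd-to-last field
                     .map(lambda status: (status, 1))
                     .reduceByKey(lambda a, b: a + b))

    print(status_counts.collect())
    sc.stop()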

What you’ll learn

  • Learn how to use Apache Spark to perform data analysis
  • How to use parallel programming to explore data sets
  • Apply Log Mining, Textual Entity Recognition and Collaborative Filtering to real world data questions
  • Prepare for the Spark Certified Developer exam

Meet the online instructor:

Anthony D. Joseph

Meet the coach:

Kris Peeters from Data Minded

Certificate

Pursue a Verified Certificate to highlight the knowledge and skills you gain ($50)

View a PDF of a sample edX certificate
  • Official and Verified

    Receive a credential signed by the instructor, with the institution logo to verify your achievement and increase your job prospects

  • Easily Shareable

    Add the certificate to your CV, resume or post it directly on LinkedIn

  • Proven Motivator

    Get the credential as an incentive for your successful course completion

Job opportunities?


Click here for data-related job offers.
Join our community on LinkedIn and attend our meetups.
Follow our twitter account: @datajobsbe


Repost – Vincent Granville – 20 short tutorials all data scientists should read (and practice)

The new, completed version of this Data Science Cheat Sheet can be found here.

We are now at 20, up from 17. I hope I find the time to write a one-page survival guide for UNIX, Python and Perl. Here’s one for R. The links to core data science concepts are below – I need to add links to web crawling, attribution modeling and API design. Relevancy engines are discussed in some of the tutorials listed below. And that will complete my 10-page cheat sheet for data science.

Here’s the list:

  1. Tutorial: How to detect spurious correlations, and how to find the …
  2. Practical illustration of Map-Reduce (Hadoop-style), on real data
  3. Jackknife logistic and linear regression for clustering and predict…
  4. From the trenches: 360-degrees data science
  5. A synthetic variance designed for Hadoop and big data
  6. Fast Combinatorial Feature Selection with New Definition of Predict…
  7. A little known component that should be part of most data science a…
  8. 11 Features any database, SQL or NoSQL, should have
  9. Clustering idea for very large datasets
  10. Hidden decision trees revisited
  11. Correlation and R-Squared for Big Data
  12. Marrying computer science, statistics and domain expertize
  13. New pattern to predict stock prices, multiplies return by factor 5
  14. What Map Reduce can’t do
  15. Excel for Big Data
  16. Fast clustering algorithms for massive datasets
  17. Source code for our Big Data keyword correlation API
  18. The curse of big data
  19. How to detect a pattern? Problem and solution
  20. Interesting Data Science Application: Steganography

Other Cheat Sheets

Vincent’s Cheat Sheets for Perl, R, Excel (includes Linest, Vlookup), Linux, cron jobs, gzip, ftp, putty, regular expressions, Cygwin, pipe operators, file management, dashboard design, etc. – coming soon

Cheat Sheets for Python

Cheat Sheets for R

Cross Reference between R, Python (and Matlab)

Cheat Sheets for SQL

Additional

Related link: The Data Science Toolkit

Other interesting links

Job – KBC – Group Data Scientist

KBC asked us to publish yet another job opportunity.

 


KBC Group Data Scientist

One of the largest bancassurers in Belgium and in the emerging markets of Central Europe, KBC is active in retail banking, insurance, private banking, asset management and corporate banking.
KBC Group employs around 37 000 people and serves approximately 9 million customers.
For more information about the KBC Group, visit www.kbc.be.

Job description

Objective of the role

In a changing market environment, where customer needs are more diverse and customer expectations are more personalized, KBC Group wants to make optimal use of the growing “data footprint” of the market to become more customer-centric and a reference in data analytics. Ultimately, we aim to transform our business into a data-driven group.

KBC therefore wants to attract internal capabilities that are highly advanced in exploiting, analysing and modelling data.

Background & Place in the organisation

The recruiting group will operate at Corporate level, working in support of all Business Units.

They will be accountable for:

  • Becoming the reference for Big Data expertise within the Group with regards to Data Analysis & Modelling
  • Overseeing a portfolio of initiatives, ensuring both knowledge sharing as well as supporting the installed corporate governance body
  • Feeding local Business Units with previously unknown insights based on data and, where required, assisting them with the commercial activation of these insights
  • Testing commercial hypotheses suggested by Business Units or generated within the group itself
  • Ensuring the establishment and evolution of a knowledge and technology base throughout the Group
  • Maintaining, to that end, a close collaborative relationship with KBC IT
  • Establishing and exploiting partnerships outside KBC, both academic and commercial.

The group will not be accountable for monetising the data insights within the market – Business Units are accountable for this.
The personal objectives will be derived from the group objectives.

Location

Louvain, with regular trips to other KBC headquarters internationally.

Profile and Education

Educational background (academic or otherwise)

Master’s degree in a technical field such as mathematics, applied science, computer science, physics, chemistry, econometrics, actuarial science, etc.

Technical Skills (key skills in bold)

  • Fundamentals: SQL, OLAP, JSON and XML
  • Statistics: Exploratory data analysis, Bayesian statistics, Probability Theory, Regression, Correlation, Monte Carlo, hypothesis testing
  • Programming: Python/R/SPSS, Web scraping, reading various kinds of data, Java, .NET, C#
  • Machine learning: Supervised/Unsupervised learning, predictive algorithms, classifiers, association, regression, Trees, KNN etc.
  • Text mining / NLP: Text analysis, feature extraction, SVM
  • Visualization: Tableau, Spotfire, d3.js, Shiny, ggplot2
  • Big Data: MapReduce, HDFS, data replication
  • Data Ingestion: Data discovery, data sources & acquisition, data integration, data fusion, how much data?
  • Data Mining: PCA, denoising, feature extraction, handling missing values, data scrubbing, dimensionality and numerosity reduction

Non-Technical Skills

  • Entrepreneurial, self-demanding and a constant learner, persistent, innovative
  • Self (project) management skills
  • Besides a natural curiosity and creativity for problem solving, you are pragmatic and communicative.
  • This unique combination makes you a good team player, but you are also able to work independently on different cases.
  • You can clearly present and visualize your work to others.
  • You have obtained these skills through several years of working experience and/or PhD research.
  • Knowledge of financial services, marketing or client behavior is an advantage (banking industry, telecom)
  • A feel for the client, i.e. you can clearly communicate the outcomes of complicated analyses, making complex things simple without dropping the essence of the problem.

What KBC offers

You can count on KBC for:

  • active support during your career,
  • an exceptional range of training and development opportunities,
  • many different career opportunities,
  • a permanent contract,
  • a competitive salary package, including an extensive package of additional benefits and special terms for employees for our banking and insurance products,
  • possibilities to integrate your work and private life,
  • a dynamic working environment with an open culture and pleasant atmosphere.

How to apply?

Apply online with the application form on this website.

 

More jobs?


Click here for more job offers.

Check out our twitter account: @datajobsbe

Job – Netlog/Twoo/Stepout – Data Scientist – Gent


Massive Media is the social media company behind the successful digital brands Netlog.com and Twoo.com. In November 2013 Massive Media bought and relaunched the social discovery platform Stepout. We enable members to meet nearby people instantly. Over 100 million people have joined our sites on web and mobile. Check it out & apply  here.

 

Data Scientist – Massive Media Match, Gent

We want to add some fresh talent to our data team to make sure it can fully continue its mission of turning the huge amounts of data we gather into gold.

Are you fascinated with big data technologies such as Hadoop and HBase?

Can you impress us with a solid technical background and substantial Python and SQL knowledge?

Are you familiar with the UNIX shell and common web technologies like Javascript and HTML?

Did you get blessed with a healthy interest in data visualisation, statistics and machine learning?

Does an agile and fast-paced development atmosphere sound like your perfect work environment?

Do you have the creativity, drive and discipline to get things done?

If your answer to all of these questions is “Yes, show me the data!” then we have a great job for you. Apply now and become part of an exceptional team of data scientists who are determined to teach you everything there is to know in one of the most exciting areas of computer and information science!

More jobs?


Click here for more job offers.

Check out our twitter account: @datajobsbe

Job – Amplidata – Gent – Big Data System Engineer IWT/CAP


 

 

Big Data System Engineer IWT/CAP (M/F)

Vacancy at Amplidata

Original post: http://regiojobs.hln.be/detail_job/6915162_big-data-system-engineer-iwtcap-mv_lochristi.jsp

Online since 03/07/2014
Location: Lochristi
Experience: no experience required
Sector: ICT, Telecommunications & Internet
Job type: full-time
Contract type: permanent contract
Language skills: English

Apply online

Job description

 

Amplidata is the sole European company that builds the Big Data cloud storage of tomorrow. Our customers are the largest Telcos, datacenter providers and governments in the world.

Our recruitment video:

Work together with top-class engineers in ICT and storage, live in a world of congenial minds, and report directly to the CTO. As a team, we eat complex challenges for lunch, like building high-performance Big Data systems.

You will be part of an IWT/CAP research project led by Amplidata in cooperation with an important industry partner and with Sirris, the collective centre of the Belgian technological industry.

You will be responsible for helping engineer and integrate Hadoop-based systems within our AmpliStor Cloud Storage technology, whilst further researching, testing and implementing new technologies.

You will work with technologies such as Hadoop, Python, the Amazon S3 API and REST/HTTP.
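As a rough, purely illustrative sketch of what talking to an S3-compatible API from Python can look like (the endpoint, credentials and bucket below are placeholders, not Amplidata specifics):

    # Illustrative only: store and read back an object over an S3-compatible API with boto3.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.example.com",  # hypothetical S3-compatible endpoint
        aws_access_key_id="ACCESS_KEY",              # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
    response = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
    print(response["Body"].read().decode())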

The ideal candidate will be passionate about up-and-coming technologies and love experimenting with new technology.

 

Function

classification: ICT & Internet
sector: ICT, Telecommunications & Internet

Profile

Software development is not just a job like any other; it must be a passion for you. Besides the daily job, you have one or more ‘ICT pet projects’; you have at least 2 to 3 years of relevant experience in a Linux, C++ and object-oriented design and development environment.

 

· You like to regularly explore new technologies.

· You have a Master’s degree in ICT and/or Electronics.

· You have proven proficiency in operating systems, file systems, networks, databases and multi-threaded programming in Unix/Linux environments.

Job-related competencies

  • Analysing the needs of the client or user
    Drawing up the functional requirements document (specifications, deadlines, costs, …)

Personal competencies:

  • planning and organising
  • working accurately
  • collaborating
  • working independently
  • working towards results

You have

language skills: English
experience: no experience required
education level: Master

Offer

We offer you a high-tech playground to explore all facets of ICT: from software analysis, design and ‘from scratch’ development to extensive quality testing and engineering of new software components.

You are the key to success for Amplidata. That’s why you will get a very attractive remuneration: a fixed salary (75th percentile), a decent company car, a complete insurance package, stock options, meal vouchers, a mobile phone (GSM) and an Internet subscription.

Amplidata spares no energy or time in taking you and your knowledge to the next level. We complete the package with peer programming, on-the-job coaching and enough time for research.

In return we expect 100% commitment to your job, in which you proactively dive into the secrets of our technology.

Oh, and by the way: our office hours are flexible. One day a week you can work from home, you have the freedom to plan your own workday, and with our ironing service that horrendous chore is now history for you or your partner.

We offer you

contract type: permanent contract
job type: full-time

Respond to this job

Antwerpsesteenweg 19
9080 Lochristi
 
 
 
Apply online